Test Report: KVM_Linux_crio 19370

adc0c841af400141e073e0e45061d84afa6c9617:2024-08-04:35634

Test fail (11/215)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-033173 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-033173 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.955717016s)

-- stdout --
	* [addons-033173] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-033173" primary control-plane node in "addons-033173" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	* Verifying ingress addon...
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-033173 service yakd-dashboard -n yakd-dashboard
	
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-033173 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: storage-provisioner, nvidia-device-plugin, metrics-server, ingress-dns, inspektor-gadget, helm-tiller, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
** stderr ** 
	I0803 23:03:13.031478  331973 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:03:13.031599  331973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:03:13.031605  331973 out.go:304] Setting ErrFile to fd 2...
	I0803 23:03:13.031612  331973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:03:13.031841  331973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:03:13.032480  331973 out.go:298] Setting JSON to false
	I0803 23:03:13.033460  331973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27941,"bootTime":1722698252,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:03:13.033551  331973 start.go:139] virtualization: kvm guest
	I0803 23:03:13.035553  331973 out.go:177] * [addons-033173] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:03:13.037094  331973 out.go:177]   - MINIKUBE_LOCATION=19370
	I0803 23:03:13.037124  331973 notify.go:220] Checking for updates...
	I0803 23:03:13.039632  331973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:03:13.041213  331973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:03:13.042685  331973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:03:13.044157  331973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:03:13.045566  331973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:03:13.046965  331973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:03:13.080839  331973 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 23:03:13.082141  331973 start.go:297] selected driver: kvm2
	I0803 23:03:13.082160  331973 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:03:13.082177  331973 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:03:13.083111  331973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:03:13.083186  331973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:03:13.099313  331973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:03:13.099377  331973 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:03:13.099622  331973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:03:13.099652  331973 cni.go:84] Creating CNI manager for ""
	I0803 23:03:13.099660  331973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:03:13.099673  331973 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 23:03:13.099731  331973 start.go:340] cluster config:
	{Name:addons-033173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-033173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:03:13.099850  331973 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:03:13.101932  331973 out.go:177] * Starting "addons-033173" primary control-plane node in "addons-033173" cluster
	I0803 23:03:13.103249  331973 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:03:13.103296  331973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:03:13.103307  331973 cache.go:56] Caching tarball of preloaded images
	I0803 23:03:13.103407  331973 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:03:13.103418  331973 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:03:13.103772  331973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/config.json ...
	I0803 23:03:13.103805  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/config.json: {Name:mk77935e700cc1ade8f2199427f496b09033ce80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:13.103961  331973 start.go:360] acquireMachinesLock for addons-033173: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:03:13.104007  331973 start.go:364] duration metric: took 32.827µs to acquireMachinesLock for "addons-033173"
	I0803 23:03:13.104025  331973 start.go:93] Provisioning new machine with config: &{Name:addons-033173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-033173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:03:13.104096  331973 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 23:03:13.105598  331973 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0803 23:03:13.105769  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:03:13.105820  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:03:13.121404  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0803 23:03:13.121916  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:03:13.122583  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:03:13.122605  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:03:13.123058  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:03:13.123283  331973 main.go:141] libmachine: (addons-033173) Calling .GetMachineName
	I0803 23:03:13.123489  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:13.123670  331973 start.go:159] libmachine.API.Create for "addons-033173" (driver="kvm2")
	I0803 23:03:13.123706  331973 client.go:168] LocalClient.Create starting
	I0803 23:03:13.123765  331973 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0803 23:03:13.229125  331973 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0803 23:03:13.311472  331973 main.go:141] libmachine: Running pre-create checks...
	I0803 23:03:13.311499  331973 main.go:141] libmachine: (addons-033173) Calling .PreCreateCheck
	I0803 23:03:13.312151  331973 main.go:141] libmachine: (addons-033173) Calling .GetConfigRaw
	I0803 23:03:13.312645  331973 main.go:141] libmachine: Creating machine...
	I0803 23:03:13.312661  331973 main.go:141] libmachine: (addons-033173) Calling .Create
	I0803 23:03:13.312880  331973 main.go:141] libmachine: (addons-033173) Creating KVM machine...
	I0803 23:03:13.314316  331973 main.go:141] libmachine: (addons-033173) DBG | found existing default KVM network
	I0803 23:03:13.315134  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:13.314915  331995 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0803 23:03:13.315154  331973 main.go:141] libmachine: (addons-033173) DBG | created network xml: 
	I0803 23:03:13.315169  331973 main.go:141] libmachine: (addons-033173) DBG | <network>
	I0803 23:03:13.315177  331973 main.go:141] libmachine: (addons-033173) DBG |   <name>mk-addons-033173</name>
	I0803 23:03:13.315189  331973 main.go:141] libmachine: (addons-033173) DBG |   <dns enable='no'/>
	I0803 23:03:13.315197  331973 main.go:141] libmachine: (addons-033173) DBG |   
	I0803 23:03:13.315210  331973 main.go:141] libmachine: (addons-033173) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0803 23:03:13.315221  331973 main.go:141] libmachine: (addons-033173) DBG |     <dhcp>
	I0803 23:03:13.315267  331973 main.go:141] libmachine: (addons-033173) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0803 23:03:13.315291  331973 main.go:141] libmachine: (addons-033173) DBG |     </dhcp>
	I0803 23:03:13.315302  331973 main.go:141] libmachine: (addons-033173) DBG |   </ip>
	I0803 23:03:13.315334  331973 main.go:141] libmachine: (addons-033173) DBG |   
	I0803 23:03:13.315348  331973 main.go:141] libmachine: (addons-033173) DBG | </network>
	I0803 23:03:13.315355  331973 main.go:141] libmachine: (addons-033173) DBG | 
	I0803 23:03:13.320816  331973 main.go:141] libmachine: (addons-033173) DBG | trying to create private KVM network mk-addons-033173 192.168.39.0/24...
	I0803 23:03:13.392901  331973 main.go:141] libmachine: (addons-033173) DBG | private KVM network mk-addons-033173 192.168.39.0/24 created
	I0803 23:03:13.392961  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:13.392849  331995 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:03:13.392996  331973 main.go:141] libmachine: (addons-033173) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173 ...
	I0803 23:03:13.393026  331973 main.go:141] libmachine: (addons-033173) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:03:13.393088  331973 main.go:141] libmachine: (addons-033173) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:03:13.649475  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:13.649314  331995 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa...
	I0803 23:03:13.893196  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:13.893016  331995 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/addons-033173.rawdisk...
	I0803 23:03:13.893234  331973 main.go:141] libmachine: (addons-033173) DBG | Writing magic tar header
	I0803 23:03:13.893250  331973 main.go:141] libmachine: (addons-033173) DBG | Writing SSH key tar header
	I0803 23:03:13.893263  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:13.893136  331995 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173 ...
	I0803 23:03:13.893277  331973 main.go:141] libmachine: (addons-033173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173
	I0803 23:03:13.893286  331973 main.go:141] libmachine: (addons-033173) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173 (perms=drwx------)
	I0803 23:03:13.893293  331973 main.go:141] libmachine: (addons-033173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0803 23:03:13.893301  331973 main.go:141] libmachine: (addons-033173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:03:13.893306  331973 main.go:141] libmachine: (addons-033173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0803 23:03:13.893314  331973 main.go:141] libmachine: (addons-033173) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:03:13.893319  331973 main.go:141] libmachine: (addons-033173) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:03:13.893327  331973 main.go:141] libmachine: (addons-033173) DBG | Checking permissions on dir: /home
	I0803 23:03:13.893334  331973 main.go:141] libmachine: (addons-033173) DBG | Skipping /home - not owner
	I0803 23:03:13.893372  331973 main.go:141] libmachine: (addons-033173) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:03:13.893410  331973 main.go:141] libmachine: (addons-033173) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0803 23:03:13.893421  331973 main.go:141] libmachine: (addons-033173) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0803 23:03:13.893432  331973 main.go:141] libmachine: (addons-033173) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:03:13.893439  331973 main.go:141] libmachine: (addons-033173) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:03:13.893447  331973 main.go:141] libmachine: (addons-033173) Creating domain...
	I0803 23:03:13.894619  331973 main.go:141] libmachine: (addons-033173) define libvirt domain using xml: 
	I0803 23:03:13.894648  331973 main.go:141] libmachine: (addons-033173) <domain type='kvm'>
	I0803 23:03:13.894667  331973 main.go:141] libmachine: (addons-033173)   <name>addons-033173</name>
	I0803 23:03:13.894681  331973 main.go:141] libmachine: (addons-033173)   <memory unit='MiB'>4000</memory>
	I0803 23:03:13.894691  331973 main.go:141] libmachine: (addons-033173)   <vcpu>2</vcpu>
	I0803 23:03:13.894695  331973 main.go:141] libmachine: (addons-033173)   <features>
	I0803 23:03:13.894701  331973 main.go:141] libmachine: (addons-033173)     <acpi/>
	I0803 23:03:13.894705  331973 main.go:141] libmachine: (addons-033173)     <apic/>
	I0803 23:03:13.894710  331973 main.go:141] libmachine: (addons-033173)     <pae/>
	I0803 23:03:13.894717  331973 main.go:141] libmachine: (addons-033173)     
	I0803 23:03:13.894722  331973 main.go:141] libmachine: (addons-033173)   </features>
	I0803 23:03:13.894740  331973 main.go:141] libmachine: (addons-033173)   <cpu mode='host-passthrough'>
	I0803 23:03:13.894752  331973 main.go:141] libmachine: (addons-033173)   
	I0803 23:03:13.894773  331973 main.go:141] libmachine: (addons-033173)   </cpu>
	I0803 23:03:13.894783  331973 main.go:141] libmachine: (addons-033173)   <os>
	I0803 23:03:13.894791  331973 main.go:141] libmachine: (addons-033173)     <type>hvm</type>
	I0803 23:03:13.894800  331973 main.go:141] libmachine: (addons-033173)     <boot dev='cdrom'/>
	I0803 23:03:13.894804  331973 main.go:141] libmachine: (addons-033173)     <boot dev='hd'/>
	I0803 23:03:13.894816  331973 main.go:141] libmachine: (addons-033173)     <bootmenu enable='no'/>
	I0803 23:03:13.894825  331973 main.go:141] libmachine: (addons-033173)   </os>
	I0803 23:03:13.894834  331973 main.go:141] libmachine: (addons-033173)   <devices>
	I0803 23:03:13.894849  331973 main.go:141] libmachine: (addons-033173)     <disk type='file' device='cdrom'>
	I0803 23:03:13.894862  331973 main.go:141] libmachine: (addons-033173)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/boot2docker.iso'/>
	I0803 23:03:13.894874  331973 main.go:141] libmachine: (addons-033173)       <target dev='hdc' bus='scsi'/>
	I0803 23:03:13.894884  331973 main.go:141] libmachine: (addons-033173)       <readonly/>
	I0803 23:03:13.894893  331973 main.go:141] libmachine: (addons-033173)     </disk>
	I0803 23:03:13.894901  331973 main.go:141] libmachine: (addons-033173)     <disk type='file' device='disk'>
	I0803 23:03:13.894912  331973 main.go:141] libmachine: (addons-033173)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:03:13.894928  331973 main.go:141] libmachine: (addons-033173)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/addons-033173.rawdisk'/>
	I0803 23:03:13.894939  331973 main.go:141] libmachine: (addons-033173)       <target dev='hda' bus='virtio'/>
	I0803 23:03:13.894947  331973 main.go:141] libmachine: (addons-033173)     </disk>
	I0803 23:03:13.894957  331973 main.go:141] libmachine: (addons-033173)     <interface type='network'>
	I0803 23:03:13.894966  331973 main.go:141] libmachine: (addons-033173)       <source network='mk-addons-033173'/>
	I0803 23:03:13.894976  331973 main.go:141] libmachine: (addons-033173)       <model type='virtio'/>
	I0803 23:03:13.894984  331973 main.go:141] libmachine: (addons-033173)     </interface>
	I0803 23:03:13.894992  331973 main.go:141] libmachine: (addons-033173)     <interface type='network'>
	I0803 23:03:13.895001  331973 main.go:141] libmachine: (addons-033173)       <source network='default'/>
	I0803 23:03:13.895012  331973 main.go:141] libmachine: (addons-033173)       <model type='virtio'/>
	I0803 23:03:13.895030  331973 main.go:141] libmachine: (addons-033173)     </interface>
	I0803 23:03:13.895040  331973 main.go:141] libmachine: (addons-033173)     <serial type='pty'>
	I0803 23:03:13.895051  331973 main.go:141] libmachine: (addons-033173)       <target port='0'/>
	I0803 23:03:13.895060  331973 main.go:141] libmachine: (addons-033173)     </serial>
	I0803 23:03:13.895072  331973 main.go:141] libmachine: (addons-033173)     <console type='pty'>
	I0803 23:03:13.895085  331973 main.go:141] libmachine: (addons-033173)       <target type='serial' port='0'/>
	I0803 23:03:13.895096  331973 main.go:141] libmachine: (addons-033173)     </console>
	I0803 23:03:13.895108  331973 main.go:141] libmachine: (addons-033173)     <rng model='virtio'>
	I0803 23:03:13.895121  331973 main.go:141] libmachine: (addons-033173)       <backend model='random'>/dev/random</backend>
	I0803 23:03:13.895130  331973 main.go:141] libmachine: (addons-033173)     </rng>
	I0803 23:03:13.895138  331973 main.go:141] libmachine: (addons-033173)     
	I0803 23:03:13.895147  331973 main.go:141] libmachine: (addons-033173)     
	I0803 23:03:13.895155  331973 main.go:141] libmachine: (addons-033173)   </devices>
	I0803 23:03:13.895163  331973 main.go:141] libmachine: (addons-033173) </domain>
	I0803 23:03:13.895171  331973 main.go:141] libmachine: (addons-033173) 
	I0803 23:03:13.899926  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:83:41:34 in network default
	I0803 23:03:13.900564  331973 main.go:141] libmachine: (addons-033173) Ensuring networks are active...
	I0803 23:03:13.900587  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:13.901306  331973 main.go:141] libmachine: (addons-033173) Ensuring network default is active
	I0803 23:03:13.901663  331973 main.go:141] libmachine: (addons-033173) Ensuring network mk-addons-033173 is active
	I0803 23:03:13.902241  331973 main.go:141] libmachine: (addons-033173) Getting domain xml...
	I0803 23:03:13.903015  331973 main.go:141] libmachine: (addons-033173) Creating domain...
	I0803 23:03:15.117949  331973 main.go:141] libmachine: (addons-033173) Waiting to get IP...
	I0803 23:03:15.118671  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:15.119065  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:15.119100  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:15.119054  331995 retry.go:31] will retry after 248.031155ms: waiting for machine to come up
	I0803 23:03:15.368694  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:15.369148  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:15.369178  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:15.369088  331995 retry.go:31] will retry after 352.952651ms: waiting for machine to come up
	I0803 23:03:15.723749  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:15.724139  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:15.724167  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:15.724088  331995 retry.go:31] will retry after 347.757434ms: waiting for machine to come up
	I0803 23:03:16.073827  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:16.074266  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:16.074295  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:16.074217  331995 retry.go:31] will retry after 407.702757ms: waiting for machine to come up
	I0803 23:03:16.484019  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:16.484467  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:16.484528  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:16.484433  331995 retry.go:31] will retry after 634.618076ms: waiting for machine to come up
	I0803 23:03:17.120319  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:17.120809  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:17.120840  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:17.120746  331995 retry.go:31] will retry after 664.264074ms: waiting for machine to come up
	I0803 23:03:17.786704  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:17.787424  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:17.787457  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:17.787348  331995 retry.go:31] will retry after 1.152641891s: waiting for machine to come up
	I0803 23:03:18.941322  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:18.941803  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:18.941830  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:18.941758  331995 retry.go:31] will retry after 1.01564361s: waiting for machine to come up
	I0803 23:03:19.959073  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:19.959544  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:19.959575  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:19.959477  331995 retry.go:31] will retry after 1.628485704s: waiting for machine to come up
	I0803 23:03:21.590490  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:21.591011  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:21.591036  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:21.590958  331995 retry.go:31] will retry after 2.012438941s: waiting for machine to come up
	I0803 23:03:23.605446  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:23.605913  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:23.605939  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:23.605874  331995 retry.go:31] will retry after 2.54955195s: waiting for machine to come up
	I0803 23:03:26.158501  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:26.158923  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:26.158955  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:26.158860  331995 retry.go:31] will retry after 2.509870991s: waiting for machine to come up
	I0803 23:03:28.670049  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:28.670525  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:28.670555  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:28.670471  331995 retry.go:31] will retry after 3.650888556s: waiting for machine to come up
	I0803 23:03:32.325284  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:32.325605  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find current IP address of domain addons-033173 in network mk-addons-033173
	I0803 23:03:32.325629  331973 main.go:141] libmachine: (addons-033173) DBG | I0803 23:03:32.325555  331995 retry.go:31] will retry after 4.690136019s: waiting for machine to come up
	I0803 23:03:37.020370  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.020862  331973 main.go:141] libmachine: (addons-033173) Found IP for machine: 192.168.39.243
	I0803 23:03:37.020890  331973 main.go:141] libmachine: (addons-033173) Reserving static IP address...
	I0803 23:03:37.020924  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has current primary IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.021265  331973 main.go:141] libmachine: (addons-033173) DBG | unable to find host DHCP lease matching {name: "addons-033173", mac: "52:54:00:3e:69:02", ip: "192.168.39.243"} in network mk-addons-033173
	I0803 23:03:37.100858  331973 main.go:141] libmachine: (addons-033173) DBG | Getting to WaitForSSH function...
	I0803 23:03:37.100887  331973 main.go:141] libmachine: (addons-033173) Reserved static IP address: 192.168.39.243
	I0803 23:03:37.100901  331973 main.go:141] libmachine: (addons-033173) Waiting for SSH to be available...
	I0803 23:03:37.103899  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.104432  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.104459  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.104574  331973 main.go:141] libmachine: (addons-033173) DBG | Using SSH client type: external
	I0803 23:03:37.104601  331973 main.go:141] libmachine: (addons-033173) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa (-rw-------)
	I0803 23:03:37.104644  331973 main.go:141] libmachine: (addons-033173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:03:37.104661  331973 main.go:141] libmachine: (addons-033173) DBG | About to run SSH command:
	I0803 23:03:37.104675  331973 main.go:141] libmachine: (addons-033173) DBG | exit 0
	I0803 23:03:37.233989  331973 main.go:141] libmachine: (addons-033173) DBG | SSH cmd err, output: <nil>: 
	I0803 23:03:37.234269  331973 main.go:141] libmachine: (addons-033173) KVM machine creation complete!
	I0803 23:03:37.234666  331973 main.go:141] libmachine: (addons-033173) Calling .GetConfigRaw
	I0803 23:03:37.235246  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:37.235507  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:37.235748  331973 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:03:37.235766  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:03:37.237360  331973 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:03:37.237378  331973 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:03:37.237386  331973 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:03:37.237396  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:37.239920  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.240237  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.240257  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.240411  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:37.240614  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.240767  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.240895  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:37.241060  331973 main.go:141] libmachine: Using SSH client type: native
	I0803 23:03:37.241262  331973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0803 23:03:37.241273  331973 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:03:37.352950  331973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:03:37.352977  331973 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:03:37.352985  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:37.356041  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.356460  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.356508  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.356666  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:37.356870  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.357030  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.357184  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:37.357365  331973 main.go:141] libmachine: Using SSH client type: native
	I0803 23:03:37.357604  331973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0803 23:03:37.357620  331973 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:03:37.470823  331973 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:03:37.470957  331973 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:03:37.470973  331973 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:03:37.470984  331973 main.go:141] libmachine: (addons-033173) Calling .GetMachineName
	I0803 23:03:37.471263  331973 buildroot.go:166] provisioning hostname "addons-033173"
	I0803 23:03:37.471299  331973 main.go:141] libmachine: (addons-033173) Calling .GetMachineName
	I0803 23:03:37.471549  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:37.474284  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.474642  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.474667  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.474836  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:37.475019  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.475179  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.475315  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:37.475472  331973 main.go:141] libmachine: Using SSH client type: native
	I0803 23:03:37.475660  331973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0803 23:03:37.475672  331973 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-033173 && echo "addons-033173" | sudo tee /etc/hostname
	I0803 23:03:37.600377  331973 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-033173
	
	I0803 23:03:37.600409  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:37.603205  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.603490  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.603521  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.603691  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:37.603897  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.604064  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.604198  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:37.604375  331973 main.go:141] libmachine: Using SSH client type: native
	I0803 23:03:37.604587  331973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0803 23:03:37.604603  331973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-033173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-033173/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-033173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:03:37.727023  331973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:03:37.727060  331973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:03:37.727098  331973 buildroot.go:174] setting up certificates
	I0803 23:03:37.727113  331973 provision.go:84] configureAuth start
	I0803 23:03:37.727131  331973 main.go:141] libmachine: (addons-033173) Calling .GetMachineName
	I0803 23:03:37.727461  331973 main.go:141] libmachine: (addons-033173) Calling .GetIP
	I0803 23:03:37.730415  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.730843  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.730876  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.731013  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:37.733038  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.733316  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.733344  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.733470  331973 provision.go:143] copyHostCerts
	I0803 23:03:37.733573  331973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:03:37.733701  331973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:03:37.733768  331973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:03:37.733826  331973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.addons-033173 san=[127.0.0.1 192.168.39.243 addons-033173 localhost minikube]
	I0803 23:03:37.871163  331973 provision.go:177] copyRemoteCerts
	I0803 23:03:37.871236  331973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:03:37.871278  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:37.874177  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.874484  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:37.874514  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:37.874707  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:37.874929  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:37.875121  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:37.875391  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:03:37.964477  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:03:37.993685  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:03:38.021957  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0803 23:03:38.050644  331973 provision.go:87] duration metric: took 323.512847ms to configureAuth
	I0803 23:03:38.050680  331973 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:03:38.050904  331973 config.go:182] Loaded profile config "addons-033173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:03:38.051081  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:38.054187  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.054565  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.054589  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.054775  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:38.054966  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:38.055141  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:38.055264  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:38.055401  331973 main.go:141] libmachine: Using SSH client type: native
	I0803 23:03:38.055582  331973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0803 23:03:38.055597  331973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:03:38.351739  331973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:03:38.351787  331973 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:03:38.351796  331973 main.go:141] libmachine: (addons-033173) Calling .GetURL
	I0803 23:03:38.353077  331973 main.go:141] libmachine: (addons-033173) DBG | Using libvirt version 6000000
	I0803 23:03:38.356079  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.356504  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.356536  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.356690  331973 main.go:141] libmachine: Docker is up and running!
	I0803 23:03:38.356707  331973 main.go:141] libmachine: Reticulating splines...
	I0803 23:03:38.356716  331973 client.go:171] duration metric: took 25.23299847s to LocalClient.Create
	I0803 23:03:38.356745  331973 start.go:167] duration metric: took 25.233075562s to libmachine.API.Create "addons-033173"
	I0803 23:03:38.356757  331973 start.go:293] postStartSetup for "addons-033173" (driver="kvm2")
	I0803 23:03:38.356768  331973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:03:38.356786  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:38.357050  331973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:03:38.357072  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:38.359215  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.359530  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.359561  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.359689  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:38.359869  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:38.360013  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:38.360113  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:03:38.448653  331973 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:03:38.453183  331973 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:03:38.453220  331973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:03:38.453317  331973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:03:38.453348  331973 start.go:296] duration metric: took 96.585916ms for postStartSetup
	I0803 23:03:38.453399  331973 main.go:141] libmachine: (addons-033173) Calling .GetConfigRaw
	I0803 23:03:38.453996  331973 main.go:141] libmachine: (addons-033173) Calling .GetIP
	I0803 23:03:38.456632  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.456953  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.456985  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.457227  331973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/config.json ...
	I0803 23:03:38.457408  331973 start.go:128] duration metric: took 25.353300947s to createHost
	I0803 23:03:38.457431  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:38.459762  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.460114  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.460138  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.460330  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:38.460533  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:38.460687  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:38.460827  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:38.460976  331973 main.go:141] libmachine: Using SSH client type: native
	I0803 23:03:38.461133  331973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0803 23:03:38.461143  331973 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 23:03:38.574570  331973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722726218.550705359
	
	I0803 23:03:38.574596  331973 fix.go:216] guest clock: 1722726218.550705359
	I0803 23:03:38.574605  331973 fix.go:229] Guest: 2024-08-03 23:03:38.550705359 +0000 UTC Remote: 2024-08-03 23:03:38.457419484 +0000 UTC m=+25.463736524 (delta=93.285875ms)
	I0803 23:03:38.574627  331973 fix.go:200] guest clock delta is within tolerance: 93.285875ms
	I0803 23:03:38.574632  331973 start.go:83] releasing machines lock for "addons-033173", held for 25.470615624s
	I0803 23:03:38.574656  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:38.574963  331973 main.go:141] libmachine: (addons-033173) Calling .GetIP
	I0803 23:03:38.577720  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.578126  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.578147  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.578315  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:38.578834  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:38.579021  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:03:38.579139  331973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:03:38.579192  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:38.579207  331973 ssh_runner.go:195] Run: cat /version.json
	I0803 23:03:38.579227  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:03:38.581698  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.581993  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.582081  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.582111  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.582329  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:38.582346  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:38.582351  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:38.582527  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:03:38.582546  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:38.582678  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:38.582681  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:03:38.582861  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:03:38.582871  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:03:38.583028  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:03:38.681464  331973 ssh_runner.go:195] Run: systemctl --version
	I0803 23:03:38.687870  331973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:03:38.853778  331973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:03:38.860568  331973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:03:38.860794  331973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:03:38.877312  331973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:03:38.877338  331973 start.go:495] detecting cgroup driver to use...
	I0803 23:03:38.877416  331973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:03:38.893465  331973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:03:38.907992  331973 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:03:38.908054  331973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:03:38.921985  331973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:03:38.935926  331973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:03:39.045602  331973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:03:39.203745  331973 docker.go:233] disabling docker service ...
	I0803 23:03:39.203832  331973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:03:39.218885  331973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:03:39.231884  331973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:03:39.354197  331973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:03:39.479932  331973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:03:39.494460  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:03:39.513941  331973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:03:39.514010  331973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:03:39.524960  331973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:03:39.525037  331973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:03:39.536418  331973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:03:39.547384  331973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:03:39.558798  331973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:03:39.570002  331973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:03:39.581200  331973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:03:39.599701  331973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:03:39.610703  331973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:03:39.620573  331973 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:03:39.620637  331973 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:03:39.634423  331973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:03:39.644673  331973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:03:39.764470  331973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:03:39.903177  331973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:03:39.903280  331973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:03:39.908598  331973 start.go:563] Will wait 60s for crictl version
	I0803 23:03:39.908686  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:03:39.912426  331973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:03:39.952642  331973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:03:39.952763  331973 ssh_runner.go:195] Run: crio --version
	I0803 23:03:39.980274  331973 ssh_runner.go:195] Run: crio --version
	I0803 23:03:40.010805  331973 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:03:40.012103  331973 main.go:141] libmachine: (addons-033173) Calling .GetIP
	I0803 23:03:40.014901  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:40.015226  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:03:40.015251  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:03:40.015446  331973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:03:40.019717  331973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:03:40.032287  331973 kubeadm.go:883] updating cluster {Name:addons-033173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-033173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:03:40.032437  331973 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:03:40.032509  331973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:03:40.072275  331973 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0803 23:03:40.072376  331973 ssh_runner.go:195] Run: which lz4
	I0803 23:03:40.076453  331973 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0803 23:03:40.080837  331973 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 23:03:40.080880  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0803 23:03:41.530811  331973 crio.go:462] duration metric: took 1.454396109s to copy over tarball
	I0803 23:03:41.530903  331973 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 23:03:43.773333  331973 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.242388131s)
	I0803 23:03:43.773367  331973 crio.go:469] duration metric: took 2.242518672s to extract the tarball
	I0803 23:03:43.773378  331973 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 23:03:43.816237  331973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:03:43.860082  331973 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:03:43.860115  331973 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:03:43.860126  331973 kubeadm.go:934] updating node { 192.168.39.243 8443 v1.30.3 crio true true} ...
	I0803 23:03:43.860264  331973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-033173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-033173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:03:43.860336  331973 ssh_runner.go:195] Run: crio config
	I0803 23:03:43.904817  331973 cni.go:84] Creating CNI manager for ""
	I0803 23:03:43.904842  331973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:03:43.904855  331973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:03:43.904887  331973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-033173 NodeName:addons-033173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:03:43.905050  331973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-033173"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:03:43.905128  331973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:03:43.915467  331973 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:03:43.915575  331973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 23:03:43.925652  331973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0803 23:03:43.942586  331973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:03:43.959335  331973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0803 23:03:43.976337  331973 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0803 23:03:43.980369  331973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:03:43.993765  331973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:03:44.118063  331973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:03:44.136297  331973 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173 for IP: 192.168.39.243
	I0803 23:03:44.136333  331973 certs.go:194] generating shared ca certs ...
	I0803 23:03:44.136376  331973 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.136580  331973 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:03:44.198052  331973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt ...
	I0803 23:03:44.198087  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt: {Name:mk7080f46d6f9d3e419b626dbb32f8a0b4118bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.198290  331973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key ...
	I0803 23:03:44.198308  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key: {Name:mk610c12ad2bd22f297dfc538d1b53b124cb527c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.198417  331973 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:03:44.405211  331973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt ...
	I0803 23:03:44.405246  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt: {Name:mkb3f88ac1974d3eb95a825cd9fba2caa49d55ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.405450  331973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key ...
	I0803 23:03:44.405468  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key: {Name:mk0c4429b799e55ba169809f7b00eccc9309faa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.405596  331973 certs.go:256] generating profile certs ...
	I0803 23:03:44.405690  331973 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/client.key
	I0803 23:03:44.405710  331973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/client.crt with IP's: []
	I0803 23:03:44.485138  331973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/client.crt ...
	I0803 23:03:44.485180  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/client.crt: {Name:mk92fbd9fc7d956c300cad61b95e3b27033a18d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.485371  331973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/client.key ...
	I0803 23:03:44.485387  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/client.key: {Name:mk81f6717ad3f7179ccf8eda214953a3baa235ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.485483  331973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.key.99ea163b
	I0803 23:03:44.485530  331973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.crt.99ea163b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243]
	I0803 23:03:44.966273  331973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.crt.99ea163b ...
	I0803 23:03:44.966313  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.crt.99ea163b: {Name:mkdd23f093841537a4b198d7591c5ea147912949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.966501  331973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.key.99ea163b ...
	I0803 23:03:44.966516  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.key.99ea163b: {Name:mk349e1fe64f4b0df237b7b0f84fb4848913c050 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:44.966590  331973 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.crt.99ea163b -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.crt
	I0803 23:03:44.966662  331973 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.key.99ea163b -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.key
	I0803 23:03:44.966707  331973 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.key
	I0803 23:03:44.966725  331973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.crt with IP's: []
	I0803 23:03:45.100692  331973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.crt ...
	I0803 23:03:45.100741  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.crt: {Name:mk6ca4d819a8096a921fadb327f8a8f37b6eef1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:45.100926  331973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.key ...
	I0803 23:03:45.100940  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.key: {Name:mk7b0442d9ae1017ad21cd99846f923d6c152306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:45.101122  331973 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:03:45.101158  331973 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:03:45.101181  331973 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:03:45.101204  331973 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:03:45.101882  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:03:45.135744  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:03:45.160406  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:03:45.188884  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:03:45.217233  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0803 23:03:45.243784  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:03:45.273195  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:03:45.304208  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/addons-033173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0803 23:03:45.335452  331973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:03:45.363124  331973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:03:45.381826  331973 ssh_runner.go:195] Run: openssl version
	I0803 23:03:45.387838  331973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:03:45.399633  331973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:03:45.406087  331973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:03:45.406168  331973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:03:45.413056  331973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:03:45.428969  331973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:03:45.434630  331973 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:03:45.434707  331973 kubeadm.go:392] StartCluster: {Name:addons-033173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-033173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:03:45.434814  331973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:03:45.434869  331973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:03:45.474250  331973 cri.go:89] found id: ""
	I0803 23:03:45.474332  331973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:03:45.484490  331973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 23:03:45.494993  331973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 23:03:45.504994  331973 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 23:03:45.505023  331973 kubeadm.go:157] found existing configuration files:
	
	I0803 23:03:45.505081  331973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 23:03:45.514832  331973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 23:03:45.514922  331973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 23:03:45.525194  331973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 23:03:45.535257  331973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 23:03:45.535320  331973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 23:03:45.545816  331973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 23:03:45.555432  331973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 23:03:45.555498  331973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 23:03:45.565394  331973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 23:03:45.574855  331973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 23:03:45.574918  331973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 23:03:45.584536  331973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 23:03:45.645533  331973 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 23:03:45.645611  331973 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 23:03:45.807357  331973 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 23:03:45.807509  331973 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 23:03:45.807681  331973 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 23:03:46.041415  331973 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 23:03:46.133830  331973 out.go:204]   - Generating certificates and keys ...
	I0803 23:03:46.133974  331973 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 23:03:46.134051  331973 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 23:03:46.138740  331973 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 23:03:46.598753  331973 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 23:03:46.735642  331973 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 23:03:46.833418  331973 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 23:03:46.972557  331973 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 23:03:46.974615  331973 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-033173 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0803 23:03:47.111835  331973 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 23:03:47.112061  331973 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-033173 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0803 23:03:47.521005  331973 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 23:03:47.602062  331973 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 23:03:47.785843  331973 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 23:03:47.785913  331973 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 23:03:47.930230  331973 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 23:03:48.150864  331973 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 23:03:48.210957  331973 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 23:03:48.315723  331973 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 23:03:48.417388  331973 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 23:03:48.418068  331973 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 23:03:48.420524  331973 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 23:03:48.422285  331973 out.go:204]   - Booting up control plane ...
	I0803 23:03:48.422373  331973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 23:03:48.422458  331973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 23:03:48.422539  331973 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 23:03:48.441893  331973 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 23:03:48.442718  331973 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 23:03:48.442798  331973 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 23:03:48.578226  331973 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 23:03:48.578346  331973 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 23:03:49.078387  331973 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.947146ms
	I0803 23:03:49.078537  331973 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 23:03:54.078099  331973 kubeadm.go:310] [api-check] The API server is healthy after 5.001172246s
	I0803 23:03:54.098065  331973 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 23:03:54.121764  331973 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 23:03:54.158554  331973 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 23:03:54.158826  331973 kubeadm.go:310] [mark-control-plane] Marking the node addons-033173 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 23:03:54.178146  331973 kubeadm.go:310] [bootstrap-token] Using token: 9fckoe.tz3h3dosl4ain5mz
	I0803 23:03:54.179551  331973 out.go:204]   - Configuring RBAC rules ...
	I0803 23:03:54.179692  331973 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 23:03:54.192030  331973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 23:03:54.202877  331973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 23:03:54.211824  331973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 23:03:54.215777  331973 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 23:03:54.219956  331973 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 23:03:54.483470  331973 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 23:03:54.977098  331973 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 23:03:55.485937  331973 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 23:03:55.485964  331973 kubeadm.go:310] 
	I0803 23:03:55.486032  331973 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 23:03:55.486074  331973 kubeadm.go:310] 
	I0803 23:03:55.486197  331973 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 23:03:55.486209  331973 kubeadm.go:310] 
	I0803 23:03:55.486255  331973 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 23:03:55.486327  331973 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 23:03:55.486395  331973 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 23:03:55.486405  331973 kubeadm.go:310] 
	I0803 23:03:55.486446  331973 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 23:03:55.486452  331973 kubeadm.go:310] 
	I0803 23:03:55.486488  331973 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 23:03:55.486494  331973 kubeadm.go:310] 
	I0803 23:03:55.486570  331973 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 23:03:55.486668  331973 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 23:03:55.486773  331973 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 23:03:55.486783  331973 kubeadm.go:310] 
	I0803 23:03:55.486889  331973 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 23:03:55.486990  331973 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 23:03:55.487001  331973 kubeadm.go:310] 
	I0803 23:03:55.487110  331973 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9fckoe.tz3h3dosl4ain5mz \
	I0803 23:03:55.487255  331973 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c \
	I0803 23:03:55.487287  331973 kubeadm.go:310] 	--control-plane 
	I0803 23:03:55.487296  331973 kubeadm.go:310] 
	I0803 23:03:55.487413  331973 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 23:03:55.487425  331973 kubeadm.go:310] 
	I0803 23:03:55.487557  331973 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9fckoe.tz3h3dosl4ain5mz \
	I0803 23:03:55.487703  331973 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c 
	I0803 23:03:55.487866  331973 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 23:03:55.487900  331973 cni.go:84] Creating CNI manager for ""
	I0803 23:03:55.487927  331973 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:03:55.489623  331973 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0803 23:03:55.490640  331973 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0803 23:03:55.506635  331973 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0803 23:03:55.526503  331973 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 23:03:55.526684  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:55.526693  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-033173 minikube.k8s.io/updated_at=2024_08_03T23_03_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=addons-033173 minikube.k8s.io/primary=true
	I0803 23:03:55.566145  331973 ops.go:34] apiserver oom_adj: -16
	I0803 23:03:55.675889  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:56.176170  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:56.676669  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:57.176937  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:57.676213  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:58.176546  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:58.676741  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:59.176801  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:03:59.675943  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:00.176038  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:00.675913  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:01.176995  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:01.676224  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:02.176503  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:02.676290  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:03.176337  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:03.676480  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:04.176336  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:04.676923  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:05.176053  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:05.676555  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:06.176832  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:06.675969  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:07.176856  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:07.676124  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:08.176990  331973 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:04:08.286706  331973 kubeadm.go:1113] duration metric: took 12.760112558s to wait for elevateKubeSystemPrivileges
	I0803 23:04:08.286754  331973 kubeadm.go:394] duration metric: took 22.852055643s to StartCluster
	I0803 23:04:08.286782  331973 settings.go:142] acquiring lock: {Name:mk918fd72253bf33e8bae308fd36ed8b1c353763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:04:08.286949  331973 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:04:08.287461  331973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/kubeconfig: {Name:mkd789cdd11c6330d283dbc76129ed198eb15398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:04:08.287747  331973 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:04:08.287772  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 23:04:08.287875  331973 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0803 23:04:08.288002  331973 config.go:182] Loaded profile config "addons-033173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:04:08.288019  331973 addons.go:69] Setting inspektor-gadget=true in profile "addons-033173"
	I0803 23:04:08.288032  331973 addons.go:69] Setting default-storageclass=true in profile "addons-033173"
	I0803 23:04:08.288023  331973 addons.go:69] Setting ingress-dns=true in profile "addons-033173"
	I0803 23:04:08.288010  331973 addons.go:69] Setting yakd=true in profile "addons-033173"
	I0803 23:04:08.288060  331973 addons.go:69] Setting metrics-server=true in profile "addons-033173"
	I0803 23:04:08.288065  331973 addons.go:234] Setting addon ingress-dns=true in "addons-033173"
	I0803 23:04:08.288057  331973 addons.go:69] Setting cloud-spanner=true in profile "addons-033173"
	I0803 23:04:08.288070  331973 addons.go:69] Setting helm-tiller=true in profile "addons-033173"
	I0803 23:04:08.288084  331973 addons.go:234] Setting addon metrics-server=true in "addons-033173"
	I0803 23:04:08.288102  331973 addons.go:234] Setting addon helm-tiller=true in "addons-033173"
	I0803 23:04:08.288136  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.288156  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.288062  331973 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-033173"
	I0803 23:04:08.288083  331973 addons.go:234] Setting addon yakd=true in "addons-033173"
	I0803 23:04:08.288223  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.288136  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.288090  331973 addons.go:69] Setting storage-provisioner=true in profile "addons-033173"
	I0803 23:04:08.288561  331973 addons.go:234] Setting addon storage-provisioner=true in "addons-033173"
	I0803 23:04:08.288056  331973 addons.go:234] Setting addon inspektor-gadget=true in "addons-033173"
	I0803 23:04:08.288589  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.288596  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.288599  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.288612  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.288619  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.288636  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.288671  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.288697  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.288708  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.288720  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.288712  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.288080  331973 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-033173"
	I0803 23:04:08.288860  331973 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-033173"
	I0803 23:04:08.288901  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.288948  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.288996  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.289001  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.289022  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.288097  331973 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-033173"
	I0803 23:04:08.288727  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.288098  331973 addons.go:69] Setting registry=true in profile "addons-033173"
	I0803 23:04:08.289231  331973 addons.go:234] Setting addon registry=true in "addons-033173"
	I0803 23:04:08.289271  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.289412  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.289457  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.289675  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.289705  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.288104  331973 addons.go:69] Setting ingress=true in profile "addons-033173"
	I0803 23:04:08.289909  331973 addons.go:234] Setting addon ingress=true in "addons-033173"
	I0803 23:04:08.289956  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.290320  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.290346  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.297680  331973 out.go:177] * Verifying Kubernetes components...
	I0803 23:04:08.288111  331973 addons.go:69] Setting volcano=true in profile "addons-033173"
	I0803 23:04:08.298442  331973 addons.go:234] Setting addon volcano=true in "addons-033173"
	I0803 23:04:08.288112  331973 addons.go:69] Setting volumesnapshots=true in profile "addons-033173"
	I0803 23:04:08.298505  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.298541  331973 addons.go:234] Setting addon volumesnapshots=true in "addons-033173"
	I0803 23:04:08.288105  331973 addons.go:69] Setting gcp-auth=true in profile "addons-033173"
	I0803 23:04:08.298586  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.298589  331973 mustload.go:65] Loading cluster: addons-033173
	I0803 23:04:08.288117  331973 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-033173"
	I0803 23:04:08.298917  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.298931  331973 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-033173"
	I0803 23:04:08.298969  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.299089  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.299128  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.289127  331973 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-033173"
	I0803 23:04:08.299304  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.299351  331973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:04:08.288100  331973 addons.go:234] Setting addon cloud-spanner=true in "addons-033173"
	I0803 23:04:08.299530  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.299853  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.299882  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.300004  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.300107  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.311784  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0803 23:04:08.312266  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.312810  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.312834  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.313211  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.313826  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.313862  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.315856  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35927
	I0803 23:04:08.316415  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.316921  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.316946  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.317294  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.318032  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.318086  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.319945  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0803 23:04:08.320614  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.321257  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.321286  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.321681  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.322237  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.322266  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.323995  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39965
	I0803 23:04:08.324576  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.325137  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.325154  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.325601  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.325712  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0803 23:04:08.326570  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.326595  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.326752  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.326854  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46695
	I0803 23:04:08.327268  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.327291  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.327616  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.329813  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.329857  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.330086  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.330262  331973 config.go:182] Loaded profile config "addons-033173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:04:08.330610  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.330650  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.330983  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39391
	I0803 23:04:08.331072  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0803 23:04:08.331140  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0803 23:04:08.331293  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.332104  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.332215  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.332271  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.332325  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.341470  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.341499  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.341707  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.341720  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.341846  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.341857  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.341980  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.341991  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.342692  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.342745  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.342757  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.342800  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.343615  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.343646  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.343841  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32801
	I0803 23:04:08.343927  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.343967  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.344182  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.344448  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.344494  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.344693  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.345226  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.345245  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.345671  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.345937  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.348388  331973 addons.go:234] Setting addon default-storageclass=true in "addons-033173"
	I0803 23:04:08.348435  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.348794  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.348828  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.350540  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.351793  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0803 23:04:08.351996  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I0803 23:04:08.352317  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.352515  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.352622  331973 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0803 23:04:08.353102  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.353124  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.353858  331973 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0803 23:04:08.353876  331973 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0803 23:04:08.353897  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.353934  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.353947  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.354123  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.354320  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.354460  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.354585  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.356782  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.357495  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.358366  331973 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:04:08.358371  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.358823  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.358849  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.359193  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.359399  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.359577  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.359711  331973 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:04:08.359730  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:04:08.359751  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.359745  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.359812  331973 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0803 23:04:08.360771  331973 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0803 23:04:08.360787  331973 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0803 23:04:08.360804  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.363470  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.364382  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.364424  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.364439  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.364439  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.364766  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.364854  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.364932  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.364939  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.365135  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.365449  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.365682  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.365843  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.366056  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.368526  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0803 23:04:08.369053  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.369576  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.369601  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.369964  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.370095  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.371268  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
	I0803 23:04:08.371954  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.372293  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.372699  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.372742  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.373070  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.373090  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.373472  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.374182  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.374209  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.377257  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32917
	I0803 23:04:08.377838  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.378466  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.378490  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.378891  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.382767  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.382797  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.384000  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0803 23:04:08.384603  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.385244  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.385262  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.385712  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.386353  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.386394  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.387799  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40971
	I0803 23:04:08.387945  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0803 23:04:08.388512  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.389117  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.389135  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.389603  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.390225  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.390264  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.391435  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0803 23:04:08.392031  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.392111  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0803 23:04:08.392618  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.392826  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.392848  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.393214  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.393351  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.393361  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.393694  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.393708  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.393884  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.394589  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.394612  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.394670  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.396173  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.396457  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I0803 23:04:08.396916  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.397603  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.397660  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.397699  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.398347  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.398545  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.398563  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.398950  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.399165  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.400251  331973 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0803 23:04:08.400310  331973 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0803 23:04:08.401597  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0803 23:04:08.401731  331973 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0803 23:04:08.401739  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.401749  331973 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0803 23:04:08.401772  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.401966  331973 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 23:04:08.401988  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0803 23:04:08.402003  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.404098  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I0803 23:04:08.404261  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.404420  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0803 23:04:08.405348  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.405370  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.405514  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.405920  331973 out.go:177]   - Using image docker.io/registry:2.8.3
	I0803 23:04:08.406890  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.406892  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.406982  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.406945  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.406985  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.407048  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.407523  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.407544  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.407603  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.407640  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.407836  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.408048  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.408141  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.408349  331973 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0803 23:04:08.408902  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36493
	I0803 23:04:08.409353  331973 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0803 23:04:08.409372  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0803 23:04:08.409391  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.409401  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.409573  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.409586  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.409729  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.410286  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.410302  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.410713  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.410907  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.411771  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.411956  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.412926  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.412972  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.413532  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.414193  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.414216  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.414733  331973 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0803 23:04:08.415528  331973 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-033173"
	I0803 23:04:08.415576  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:08.415932  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.415955  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.416234  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.416264  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.416266  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.416280  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.416231  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.416446  331973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0803 23:04:08.416460  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0803 23:04:08.416481  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.416499  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.416540  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.416659  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.416705  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.416814  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.417107  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.419878  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.420364  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.420400  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.420622  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.420857  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.421033  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.421170  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.429631  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35471
	I0803 23:04:08.429848  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0803 23:04:08.430189  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.430212  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.430912  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.430945  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.430912  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.431002  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.431285  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34955
	I0803 23:04:08.431475  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.431922  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.431957  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.432524  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.432549  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.432832  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I0803 23:04:08.433335  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.433577  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.433755  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.433880  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.434085  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.434108  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.434143  331973 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:04:08.434165  331973 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:04:08.434185  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.434452  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.434774  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.435284  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.435668  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.437115  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.437676  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.438445  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.438647  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.438974  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.438980  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:08.438995  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:08.438995  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.439043  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.439070  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37769
	I0803 23:04:08.440413  331973 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0803 23:04:08.440697  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:08.440705  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I0803 23:04:08.440733  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:08.440748  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:08.440757  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:08.440764  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:08.440817  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.440966  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.441039  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.441159  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.441641  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:08.441655  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	W0803 23:04:08.441736  331973 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0803 23:04:08.441872  331973 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0803 23:04:08.441889  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0803 23:04:08.441908  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.443107  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0803 23:04:08.443551  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.444123  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.444151  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.444672  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0803 23:04:08.444830  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.445025  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.445098  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.445365  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0803 23:04:08.445749  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0803 23:04:08.445830  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.445846  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.445865  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.445995  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.446013  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.446100  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.446121  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.446221  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.446305  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.446323  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.446350  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.446495  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.446522  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.446679  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.446745  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.446963  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.447560  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.447641  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.447671  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0803 23:04:08.448187  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.448526  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.448819  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.449234  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:08.449272  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:08.449321  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.449706  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0803 23:04:08.449708  331973 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0803 23:04:08.449768  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0803 23:04:08.451663  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0803 23:04:08.451680  331973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0803 23:04:08.451694  331973 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0803 23:04:08.451694  331973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 23:04:08.451714  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.451767  331973 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 23:04:08.451777  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0803 23:04:08.451791  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.453844  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0803 23:04:08.454838  331973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 23:04:08.455356  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.455747  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.455820  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0803 23:04:08.455782  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.455937  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.455969  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.456201  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.456325  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.456379  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.456435  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.456466  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.456820  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.456872  331973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0803 23:04:08.456987  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.457361  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.457492  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.458496  331973 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 23:04:08.458513  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0803 23:04:08.458531  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.458974  331973 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0803 23:04:08.460292  331973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0803 23:04:08.460311  331973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0803 23:04:08.460328  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.462009  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.462409  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.462430  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.462613  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.462808  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.462974  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.463126  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.464492  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.464985  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.465005  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.465196  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.465373  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.465545  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.465694  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:08.485383  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34555
	I0803 23:04:08.485838  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:08.486349  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:08.486375  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:08.486749  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:08.486971  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:08.488688  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:08.490561  331973 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0803 23:04:08.492182  331973 out.go:177]   - Using image docker.io/busybox:stable
	I0803 23:04:08.493384  331973 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 23:04:08.493400  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0803 23:04:08.493422  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:08.496729  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.497088  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:08.497120  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:08.497339  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:08.497565  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:08.497747  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:08.497931  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	W0803 23:04:08.498739  331973 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40962->192.168.39.243:22: read: connection reset by peer
	I0803 23:04:08.498766  331973 retry.go:31] will retry after 166.922898ms: ssh: handshake failed: read tcp 192.168.39.1:40962->192.168.39.243:22: read: connection reset by peer
	I0803 23:04:08.798850  331973 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0803 23:04:08.798877  331973 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0803 23:04:08.911702  331973 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0803 23:04:08.911733  331973 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0803 23:04:08.946427  331973 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0803 23:04:08.946450  331973 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0803 23:04:08.949298  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:04:08.983281  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 23:04:09.007129  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 23:04:09.010387  331973 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0803 23:04:09.010410  331973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0803 23:04:09.013292  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0803 23:04:09.053813  331973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:04:09.053847  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 23:04:09.088685  331973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0803 23:04:09.088729  331973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0803 23:04:09.111140  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:04:09.113689  331973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0803 23:04:09.113718  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0803 23:04:09.119841  331973 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0803 23:04:09.119870  331973 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0803 23:04:09.137636  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 23:04:09.142405  331973 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0803 23:04:09.142422  331973 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0803 23:04:09.173778  331973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0803 23:04:09.173821  331973 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0803 23:04:09.231357  331973 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0803 23:04:09.231380  331973 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0803 23:04:09.266516  331973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0803 23:04:09.266547  331973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0803 23:04:09.278290  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 23:04:09.341595  331973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0803 23:04:09.341640  331973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0803 23:04:09.361218  331973 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0803 23:04:09.361241  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0803 23:04:09.371589  331973 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0803 23:04:09.371618  331973 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0803 23:04:09.380114  331973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0803 23:04:09.380140  331973 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0803 23:04:09.397016  331973 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0803 23:04:09.397040  331973 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0803 23:04:09.476037  331973 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0803 23:04:09.476066  331973 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0803 23:04:09.524787  331973 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0803 23:04:09.524814  331973 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0803 23:04:09.557584  331973 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:04:09.557615  331973 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0803 23:04:09.625318  331973 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0803 23:04:09.625344  331973 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0803 23:04:09.645050  331973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0803 23:04:09.645077  331973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0803 23:04:09.672452  331973 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0803 23:04:09.672481  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0803 23:04:09.682770  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0803 23:04:09.695339  331973 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0803 23:04:09.695373  331973 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0803 23:04:09.727537  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0803 23:04:09.739361  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:04:09.772060  331973 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0803 23:04:09.772090  331973 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0803 23:04:09.868283  331973 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 23:04:09.868310  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0803 23:04:09.974919  331973 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 23:04:09.974949  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0803 23:04:10.018394  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0803 23:04:10.028374  331973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0803 23:04:10.028407  331973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0803 23:04:10.133920  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 23:04:10.316066  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 23:04:10.498481  331973 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0803 23:04:10.498525  331973 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0803 23:04:10.893849  331973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0803 23:04:10.893878  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0803 23:04:11.200074  331973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0803 23:04:11.200109  331973 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0803 23:04:11.580295  331973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0803 23:04:11.580319  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0803 23:04:11.855361  331973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0803 23:04:11.855386  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0803 23:04:12.182343  331973 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 23:04:12.182373  331973 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0803 23:04:12.435657  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 23:04:13.527855  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.5785159s)
	I0803 23:04:13.527894  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.544576924s)
	I0803 23:04:13.527930  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:13.527937  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:13.527948  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:13.527950  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:13.528254  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:13.528252  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:13.528351  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:13.528358  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:13.528322  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:13.528375  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:13.528376  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:13.528384  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:13.528387  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:13.528449  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:13.528599  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:13.528714  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:13.528720  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:13.528695  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:13.528734  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:13.528698  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:15.432786  331973 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0803 23:04:15.432830  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:15.436223  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:15.436671  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:15.436702  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:15.436937  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:15.437177  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:15.437352  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:15.437539  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:16.038465  331973 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0803 23:04:16.202113  331973 addons.go:234] Setting addon gcp-auth=true in "addons-033173"
	I0803 23:04:16.202175  331973 host.go:66] Checking if "addons-033173" exists ...
	I0803 23:04:16.202596  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:16.202631  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:16.218666  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0803 23:04:16.219152  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:16.219686  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:16.219712  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:16.220025  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:16.220488  331973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:04:16.220514  331973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:04:16.235945  331973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33447
	I0803 23:04:16.236454  331973 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:04:16.236948  331973 main.go:141] libmachine: Using API Version  1
	I0803 23:04:16.236975  331973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:04:16.237406  331973 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:04:16.237634  331973 main.go:141] libmachine: (addons-033173) Calling .GetState
	I0803 23:04:16.239613  331973 main.go:141] libmachine: (addons-033173) Calling .DriverName
	I0803 23:04:16.239880  331973 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0803 23:04:16.239913  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHHostname
	I0803 23:04:16.242491  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:16.242943  331973 main.go:141] libmachine: (addons-033173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:69:02", ip: ""} in network mk-addons-033173: {Iface:virbr1 ExpiryTime:2024-08-04 00:03:28 +0000 UTC Type:0 Mac:52:54:00:3e:69:02 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:addons-033173 Clientid:01:52:54:00:3e:69:02}
	I0803 23:04:16.242970  331973 main.go:141] libmachine: (addons-033173) DBG | domain addons-033173 has defined IP address 192.168.39.243 and MAC address 52:54:00:3e:69:02 in network mk-addons-033173
	I0803 23:04:16.243158  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHPort
	I0803 23:04:16.243355  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHKeyPath
	I0803 23:04:16.243509  331973 main.go:141] libmachine: (addons-033173) Calling .GetSSHUsername
	I0803 23:04:16.243627  331973 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/addons-033173/id_rsa Username:docker}
	I0803 23:04:17.355012  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.347838247s)
	I0803 23:04:17.355076  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355088  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355096  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.341777003s)
	I0803 23:04:17.355137  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355146  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355236  331973 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.301387318s)
	I0803 23:04:17.355293  331973 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.301421302s)
	I0803 23:04:17.355328  331973 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0803 23:04:17.355372  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.217712553s)
	I0803 23:04:17.355375  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.355394  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.355405  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355408  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355414  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355419  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355506  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.077188642s)
	I0803 23:04:17.355527  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355536  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355600  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.672802039s)
	I0803 23:04:17.355616  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355626  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355707  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.628140654s)
	I0803 23:04:17.355727  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355736  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355834  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.616441627s)
	I0803 23:04:17.355850  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355858  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355927  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.337502277s)
	I0803 23:04:17.355944  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.355952  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.356070  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.222117103s)
	W0803 23:04:17.356111  331973 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0803 23:04:17.356132  331973 retry.go:31] will retry after 299.615722ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0803 23:04:17.356216  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.040115199s)
	I0803 23:04:17.356237  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.356246  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.356262  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.356280  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.356288  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.356296  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.356305  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.356307  331973 node_ready.go:35] waiting up to 6m0s for node "addons-033173" to be "Ready" ...
	I0803 23:04:17.356330  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.356337  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.356344  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.356350  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.355335  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.244164128s)
	I0803 23:04:17.356417  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.356429  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.356656  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.356704  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.356712  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.356720  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.356728  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.356777  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.356798  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.356805  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.356813  331973 addons.go:475] Verifying addon ingress=true in "addons-033173"
	I0803 23:04:17.356945  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.357000  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.357016  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.357039  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.357046  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.357054  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.357061  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.357117  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.357138  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.357145  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.357153  331973 addons.go:475] Verifying addon metrics-server=true in "addons-033173"
	I0803 23:04:17.357195  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.357209  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.358668  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.358707  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.358715  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.358724  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.358734  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.359077  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.359109  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.359116  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.359284  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.359306  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.359312  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.359320  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.359326  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.359378  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.359398  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.359404  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.359411  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.359419  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.359456  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.359475  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.359482  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.359489  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.359495  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.359540  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.359557  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.359563  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.360024  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.360054  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.360060  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.361455  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.361491  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.361499  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.361524  331973 addons.go:475] Verifying addon registry=true in "addons-033173"
	I0803 23:04:17.362055  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.362069  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.362087  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.362101  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.356914  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.362158  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.362170  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.362843  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.363183  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.363200  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.363213  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.363241  331973 out.go:177] * Verifying ingress addon...
	I0803 23:04:17.364024  331973 out.go:177] * Verifying registry addon...
	I0803 23:04:17.364024  331973 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-033173 service yakd-dashboard -n yakd-dashboard
	
	I0803 23:04:17.366087  331973 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0803 23:04:17.366908  331973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0803 23:04:17.383356  331973 node_ready.go:49] node "addons-033173" has status "Ready":"True"
	I0803 23:04:17.383395  331973 node_ready.go:38] duration metric: took 27.057243ms for node "addons-033173" to be "Ready" ...
	I0803 23:04:17.383405  331973 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:04:17.384633  331973 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0803 23:04:17.384666  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:17.445086  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.445118  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.445458  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.445476  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	W0803 23:04:17.445598  331973 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0803 23:04:17.449643  331973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-449gj" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.465398  331973 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0803 23:04:17.465426  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:17.475091  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:17.475120  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:17.475488  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:17.475519  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:17.475538  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:17.534504  331973 pod_ready.go:92] pod "coredns-7db6d8ff4d-449gj" in "kube-system" namespace has status "Ready":"True"
	I0803 23:04:17.534540  331973 pod_ready.go:81] duration metric: took 84.859591ms for pod "coredns-7db6d8ff4d-449gj" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.534556  331973 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fklk2" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.579220  331973 pod_ready.go:92] pod "coredns-7db6d8ff4d-fklk2" in "kube-system" namespace has status "Ready":"True"
	I0803 23:04:17.579259  331973 pod_ready.go:81] duration metric: took 44.691548ms for pod "coredns-7db6d8ff4d-fklk2" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.579274  331973 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.613479  331973 pod_ready.go:92] pod "etcd-addons-033173" in "kube-system" namespace has status "Ready":"True"
	I0803 23:04:17.613525  331973 pod_ready.go:81] duration metric: took 34.242323ms for pod "etcd-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.613544  331973 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.642021  331973 pod_ready.go:92] pod "kube-apiserver-addons-033173" in "kube-system" namespace has status "Ready":"True"
	I0803 23:04:17.642047  331973 pod_ready.go:81] duration metric: took 28.493115ms for pod "kube-apiserver-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.642060  331973 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.656384  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 23:04:17.760153  331973 pod_ready.go:92] pod "kube-controller-manager-addons-033173" in "kube-system" namespace has status "Ready":"True"
	I0803 23:04:17.760183  331973 pod_ready.go:81] duration metric: took 118.114276ms for pod "kube-controller-manager-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.760198  331973 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tgt6z" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:17.860098  331973 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-033173" context rescaled to 1 replicas
	I0803 23:04:17.872541  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:17.874005  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:18.167483  331973 pod_ready.go:92] pod "kube-proxy-tgt6z" in "kube-system" namespace has status "Ready":"True"
	I0803 23:04:18.167512  331973 pod_ready.go:81] duration metric: took 407.307001ms for pod "kube-proxy-tgt6z" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:18.167523  331973 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:18.370361  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:18.371581  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:18.560271  331973 pod_ready.go:92] pod "kube-scheduler-addons-033173" in "kube-system" namespace has status "Ready":"True"
	I0803 23:04:18.560300  331973 pod_ready.go:81] duration metric: took 392.769879ms for pod "kube-scheduler-addons-033173" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:18.560311  331973 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace to be "Ready" ...
	I0803 23:04:18.875622  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:18.875992  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:19.383985  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:19.386723  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:19.925093  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:19.925348  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:20.210427  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.774715146s)
	I0803 23:04:20.210467  331973 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.970558232s)
	I0803 23:04:20.210495  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:20.210511  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:20.210585  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.554158666s)
	I0803 23:04:20.210699  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:20.210717  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:20.210827  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:20.210841  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:20.210854  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:20.210864  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:20.210872  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:20.211093  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:20.211109  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:20.211119  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:20.211130  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:20.212888  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:20.212902  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:20.212902  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:20.212917  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:20.212921  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:20.212923  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:20.212929  331973 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-033173"
	I0803 23:04:20.219477  331973 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 23:04:20.219492  331973 out.go:177] * Verifying csi-hostpath-driver addon...
	I0803 23:04:20.221098  331973 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0803 23:04:20.221863  331973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0803 23:04:20.222496  331973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0803 23:04:20.222517  331973 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0803 23:04:20.251158  331973 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0803 23:04:20.251189  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:20.294700  331973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0803 23:04:20.294727  331973 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0803 23:04:20.345733  331973 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 23:04:20.345764  331973 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0803 23:04:20.371560  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:20.373250  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:20.393491  331973 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 23:04:20.568011  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:20.728621  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:20.884293  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:20.884310  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:21.232753  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:21.376610  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:21.382498  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:21.771497  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:21.775912  331973 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.382366338s)
	I0803 23:04:21.775992  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:21.776016  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:21.776494  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:21.776515  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:21.776526  331973 main.go:141] libmachine: Making call to close driver server
	I0803 23:04:21.776534  331973 main.go:141] libmachine: (addons-033173) Calling .Close
	I0803 23:04:21.776883  331973 main.go:141] libmachine: (addons-033173) DBG | Closing plugin on server side
	I0803 23:04:21.776938  331973 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:04:21.776956  331973 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:04:21.778272  331973 addons.go:475] Verifying addon gcp-auth=true in "addons-033173"
	I0803 23:04:21.779979  331973 out.go:177] * Verifying gcp-auth addon...
	I0803 23:04:21.782361  331973 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0803 23:04:21.807264  331973 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0803 23:04:21.807288  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:21.875479  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:21.876306  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:22.228186  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:22.286645  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:22.373426  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:22.376041  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:22.727463  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:22.785665  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:22.870589  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:22.872606  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:23.067740  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:23.227577  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:23.286478  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:23.371672  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:23.375887  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:23.728370  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:23.786503  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:23.872927  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:23.875232  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:24.227052  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:24.286986  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:24.373220  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:24.375591  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:24.728239  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:24.786895  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:24.871771  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:24.872393  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:25.068607  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:25.561051  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:25.567681  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:25.568963  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:25.569113  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:25.730893  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:25.786540  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:25.871075  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:25.871879  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:26.228461  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:26.285799  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:26.372594  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:26.373340  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:26.728459  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:26.787447  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:26.871267  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:26.873111  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:27.227549  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:27.286151  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:27.371157  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:27.372299  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:27.567151  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:27.729617  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:27.787380  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:27.871034  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:27.871591  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:28.232322  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:28.289678  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:28.371451  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:28.373079  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:28.728550  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:28.785863  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:28.872755  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:28.873208  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:29.226649  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:29.285645  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:29.371318  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:29.375304  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:29.728442  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:29.786939  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:29.873073  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:29.873330  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:30.066687  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:30.227624  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:30.286536  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:30.371845  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:30.372144  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:30.997243  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:30.997529  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:31.000705  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:31.001031  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:31.228419  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:31.286813  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:31.371716  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:31.374110  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:31.727500  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:31.786112  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:31.873040  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:31.873489  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:32.069820  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:32.228414  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:32.286424  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:32.370277  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:32.375186  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:32.728741  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:32.786362  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:32.871775  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:32.872739  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:33.227681  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:33.286146  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:33.376451  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:33.376831  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:33.727777  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:33.787006  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:33.871065  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:33.878486  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:34.228267  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:34.287767  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:34.377025  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:34.380716  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:34.567579  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:34.727649  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:34.785876  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:34.871156  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:34.871927  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:35.227472  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:35.286609  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:35.375479  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:35.375682  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:35.728666  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:35.786562  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:35.872052  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:35.876311  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:36.227439  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:36.286627  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:36.370961  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:36.372364  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:36.569131  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:36.726955  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:36.786215  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:36.871751  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:36.871774  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:37.227666  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:37.288107  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:37.371870  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:37.374063  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:37.728095  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:37.786699  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:37.873221  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:37.873574  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:38.228121  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:38.289455  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:38.376596  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:38.376788  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:38.726851  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:38.786440  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:38.873053  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:38.877847  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:39.121629  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:39.228123  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:39.286033  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:39.371345  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:39.371617  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:39.727497  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:39.786215  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:39.873686  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:39.873828  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:40.227499  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:40.285716  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:40.370466  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:40.377551  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:40.728009  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:40.786037  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:40.872069  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:40.872306  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:41.228088  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:41.287699  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:41.372652  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:41.380186  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:41.568929  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:41.727868  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:41.786182  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:41.873032  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:41.873204  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:42.227804  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:42.286511  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:42.371096  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:42.373095  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:42.728059  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:42.785929  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:42.872492  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:42.872488  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:43.228010  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:43.286706  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:43.373338  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:43.377247  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:43.570584  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:43.731788  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:43.786399  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:43.875020  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:43.875374  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:44.410830  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:44.411044  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:44.411988  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:44.415101  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:44.727296  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:44.786698  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:44.871763  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:44.872672  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:45.228606  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:45.286434  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:45.373151  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:45.376852  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:45.727500  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:45.786302  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:45.872998  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:45.873153  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:46.066400  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:46.226961  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:46.286469  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:46.370586  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:46.372226  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:46.732325  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:46.787406  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:46.871766  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:46.872735  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:47.227634  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:47.286858  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:47.370592  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:47.372697  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:47.748082  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:47.802658  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:47.870971  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:47.873392  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:48.228694  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:48.290574  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:48.387495  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:48.389588  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:48.568103  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:48.727461  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:48.787033  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:48.874483  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:48.886941  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:49.479272  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:49.481737  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:49.482903  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:49.483696  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:49.727971  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:49.786203  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:49.872989  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:49.873673  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:50.229668  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:50.285807  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:50.376023  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:50.379316  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:50.570627  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:50.727999  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:50.786383  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:50.871144  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:50.874514  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:51.227851  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:51.287145  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:51.374461  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:51.374482  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:51.728099  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:51.786716  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:51.872243  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:51.872611  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:52.227509  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:52.285907  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:52.373629  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:52.374147  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:52.727709  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:52.786055  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:52.871271  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:52.872700  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:53.066765  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:53.227805  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:53.286156  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:53.372268  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:53.377760  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:53.728277  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:53.786492  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:53.870636  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:53.874060  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:54.227716  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:54.289301  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:54.372699  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:54.372698  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:54.728749  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:54.786965  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:54.872772  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:54.872828  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:55.068634  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:55.228443  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:55.286185  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:55.376363  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:55.376898  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:55.729463  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:55.788708  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:55.870718  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:55.875803  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:56.227413  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:56.286155  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:56.372947  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:56.373174  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:56.727902  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:56.787641  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:56.870678  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:56.872808  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:57.227513  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:57.285942  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:57.371924  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:57.372017  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:57.567408  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:57.727411  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:57.786619  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:57.870756  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:57.890225  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:58.227362  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:58.286347  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:58.373011  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:58.373425  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:58.727610  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:58.786180  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:58.872258  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:58.873358  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:59.227638  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:59.286269  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:59.375043  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:04:59.375125  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:59.568328  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:04:59.727278  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:04:59.786042  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:04:59.871956  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:04:59.872440  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:05:00.227215  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:00.288026  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:00.372046  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 23:05:00.376083  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:00.733080  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:00.786952  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:00.871023  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:00.871352  331973 kapi.go:107] duration metric: took 43.504444188s to wait for kubernetes.io/minikube-addons=registry ...
	I0803 23:05:01.228268  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:01.286433  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:01.372084  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:01.570764  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:01.730935  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:01.786721  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:02.168602  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:02.229205  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:02.286502  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:02.370466  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:02.728105  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:02.787009  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:02.871684  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:03.227328  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:03.288328  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:03.370992  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:03.729679  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:03.786103  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:03.870900  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:04.066269  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:04.226980  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:04.286530  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:04.370732  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:04.728927  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:04.786587  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:04.870502  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:05.228352  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:05.286796  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:05.371815  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:05.745475  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:05.796443  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:05.870176  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:06.066304  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:06.235132  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:06.286566  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:06.370553  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:06.727709  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:06.787720  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:06.870976  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:07.227878  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:07.324669  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:07.375320  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:07.727667  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:07.786107  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:07.870812  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:08.076324  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:08.227507  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:08.285854  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:08.371898  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:08.728417  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:08.786283  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:08.876847  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:09.425818  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:09.425846  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:09.426289  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:09.731028  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:09.794148  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:09.873921  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:10.227124  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:10.288681  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:10.370720  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:10.568704  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:10.729699  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:10.787922  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:10.873771  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:11.231370  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:11.287651  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:11.370269  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:11.728019  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:11.798017  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:11.871263  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:12.228354  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:12.285726  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:12.370377  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:12.571765  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:12.727671  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:12.786084  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:12.871262  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:13.227330  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:13.285522  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:13.370991  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:13.727370  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:13.785924  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:13.873460  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:14.229100  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:14.286817  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:14.370655  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:14.727506  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:14.786303  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:14.871522  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:15.066725  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:15.228050  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:15.286469  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:15.371027  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:15.881751  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:15.882138  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:15.887372  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:16.227696  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:16.285923  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:16.371072  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:16.728485  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:16.786587  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:16.870863  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:17.069918  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:17.228323  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:17.285873  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:17.371143  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:17.727940  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:17.786474  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:17.870549  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:18.229682  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 23:05:18.289096  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:18.371674  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:18.729101  331973 kapi.go:107] duration metric: took 58.507231282s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0803 23:05:18.786786  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:18.870821  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:19.286777  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:19.370971  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:19.566522  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:19.786265  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:19.871581  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:20.286963  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:20.370835  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:20.786501  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:20.871064  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:21.286486  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:21.369913  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:21.566688  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:21.785908  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:21.871750  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:22.287312  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:22.372704  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:22.787291  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:22.872885  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:23.287911  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:23.371425  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:23.567468  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:23.785931  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:23.870673  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:24.286189  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:24.370551  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:24.786851  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:24.871030  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:25.286648  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:25.370612  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:25.786327  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:25.870185  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:26.066409  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:26.287108  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:26.371041  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:26.785637  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:26.870928  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:27.286814  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:27.370585  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:27.887921  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:27.888703  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:28.069730  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:28.286634  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:28.371311  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:28.786384  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:28.873009  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:29.285488  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:29.370636  331973 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 23:05:29.785524  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:29.870515  331973 kapi.go:107] duration metric: took 1m12.504425546s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0803 23:05:30.070690  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:30.286651  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:30.786379  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:31.287251  331973 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 23:05:31.786368  331973 kapi.go:107] duration metric: took 1m10.004004799s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0803 23:05:31.787908  331973 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-033173 cluster.
	I0803 23:05:31.789252  331973 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0803 23:05:31.790450  331973 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0803 23:05:31.791591  331973 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, metrics-server, ingress-dns, inspektor-gadget, helm-tiller, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0803 23:05:31.792679  331973 addons.go:510] duration metric: took 1m23.504802811s for enable addons: enabled=[storage-provisioner nvidia-device-plugin metrics-server ingress-dns inspektor-gadget helm-tiller cloud-spanner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0803 23:05:32.568359  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:35.067065  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:37.067616  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:39.067858  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:41.567338  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:44.067248  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:46.070667  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:48.566424  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:50.567234  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:52.567473  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:55.067707  331973 pod_ready.go:102] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"False"
	I0803 23:05:56.068473  331973 pod_ready.go:92] pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace has status "Ready":"True"
	I0803 23:05:56.068497  331973 pod_ready.go:81] duration metric: took 1m37.50817924s for pod "metrics-server-c59844bb4-dwpwm" in "kube-system" namespace to be "Ready" ...
	I0803 23:05:56.068507  331973 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8pff4" in "kube-system" namespace to be "Ready" ...
	I0803 23:05:56.074765  331973 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-8pff4" in "kube-system" namespace has status "Ready":"True"
	I0803 23:05:56.074790  331973 pod_ready.go:81] duration metric: took 6.275948ms for pod "nvidia-device-plugin-daemonset-8pff4" in "kube-system" namespace to be "Ready" ...
	I0803 23:05:56.074807  331973 pod_ready.go:38] duration metric: took 1m38.691390204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:05:56.074828  331973 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:05:56.074877  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0803 23:05:56.074952  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0803 23:05:56.140184  331973 cri.go:89] found id: "99cd8af102599e274582a5b8d5dc785e239f1aeabc91c4797781318ef2427ed2"
	I0803 23:05:56.140207  331973 cri.go:89] found id: ""
	I0803 23:05:56.140216  331973 logs.go:276] 1 containers: [99cd8af102599e274582a5b8d5dc785e239f1aeabc91c4797781318ef2427ed2]
	I0803 23:05:56.140278  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:05:56.147196  331973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0803 23:05:56.147269  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0803 23:05:56.187685  331973 cri.go:89] found id: "b8a4cf8941db83b559399a869f1e73bc20c0b73f7118b7dd715b4dc9ad2394c9"
	I0803 23:05:56.187720  331973 cri.go:89] found id: ""
	I0803 23:05:56.187728  331973 logs.go:276] 1 containers: [b8a4cf8941db83b559399a869f1e73bc20c0b73f7118b7dd715b4dc9ad2394c9]
	I0803 23:05:56.187796  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:05:56.192598  331973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0803 23:05:56.192682  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0803 23:05:56.236882  331973 cri.go:89] found id: "ad5943411451d49b2c70562c1457cf6f2d6645ab21360d86649fa762e1b4a229"
	I0803 23:05:56.236912  331973 cri.go:89] found id: ""
	I0803 23:05:56.236923  331973 logs.go:276] 1 containers: [ad5943411451d49b2c70562c1457cf6f2d6645ab21360d86649fa762e1b4a229]
	I0803 23:05:56.236984  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:05:56.241185  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0803 23:05:56.241251  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0803 23:05:56.281871  331973 cri.go:89] found id: "3c5bf9abcdaf9e84147eba02e6f052cb0bfd42c1d8cafffc46a7ca2c151fe7a4"
	I0803 23:05:56.281897  331973 cri.go:89] found id: ""
	I0803 23:05:56.281907  331973 logs.go:276] 1 containers: [3c5bf9abcdaf9e84147eba02e6f052cb0bfd42c1d8cafffc46a7ca2c151fe7a4]
	I0803 23:05:56.281970  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:05:56.286514  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0803 23:05:56.286595  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0803 23:05:56.324124  331973 cri.go:89] found id: "d3e41eeab565723a5b99ad40d3a1c93a2e6be163c313483c872d90305196bfc1"
	I0803 23:05:56.324156  331973 cri.go:89] found id: ""
	I0803 23:05:56.324168  331973 logs.go:276] 1 containers: [d3e41eeab565723a5b99ad40d3a1c93a2e6be163c313483c872d90305196bfc1]
	I0803 23:05:56.324231  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:05:56.328433  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0803 23:05:56.328513  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0803 23:05:56.370890  331973 cri.go:89] found id: "926e17381f97111bcb5818f9abac36cb06b49250b0e09244f2a22c3a7e4409fc"
	I0803 23:05:56.370918  331973 cri.go:89] found id: ""
	I0803 23:05:56.370929  331973 logs.go:276] 1 containers: [926e17381f97111bcb5818f9abac36cb06b49250b0e09244f2a22c3a7e4409fc]
	I0803 23:05:56.370991  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:05:56.378562  331973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0803 23:05:56.378654  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0803 23:05:56.417015  331973 cri.go:89] found id: ""
	I0803 23:05:56.417043  331973 logs.go:276] 0 containers: []
	W0803 23:05:56.417052  331973 logs.go:278] No container was found matching "kindnet"
	I0803 23:05:56.417061  331973 logs.go:123] Gathering logs for kubelet ...
	I0803 23:05:56.417076  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0803 23:05:56.502539  331973 logs.go:123] Gathering logs for dmesg ...
	I0803 23:05:56.502586  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 23:05:56.518399  331973 logs.go:123] Gathering logs for describe nodes ...
	I0803 23:05:56.518436  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 23:05:56.727128  331973 logs.go:123] Gathering logs for etcd [b8a4cf8941db83b559399a869f1e73bc20c0b73f7118b7dd715b4dc9ad2394c9] ...
	I0803 23:05:56.727163  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8a4cf8941db83b559399a869f1e73bc20c0b73f7118b7dd715b4dc9ad2394c9"
	I0803 23:05:56.786293  331973 logs.go:123] Gathering logs for coredns [ad5943411451d49b2c70562c1457cf6f2d6645ab21360d86649fa762e1b4a229] ...
	I0803 23:05:56.786333  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad5943411451d49b2c70562c1457cf6f2d6645ab21360d86649fa762e1b4a229"
	I0803 23:05:56.831585  331973 logs.go:123] Gathering logs for kube-controller-manager [926e17381f97111bcb5818f9abac36cb06b49250b0e09244f2a22c3a7e4409fc] ...
	I0803 23:05:56.831621  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926e17381f97111bcb5818f9abac36cb06b49250b0e09244f2a22c3a7e4409fc"
	I0803 23:05:56.915756  331973 logs.go:123] Gathering logs for kube-apiserver [99cd8af102599e274582a5b8d5dc785e239f1aeabc91c4797781318ef2427ed2] ...
	I0803 23:05:56.915806  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99cd8af102599e274582a5b8d5dc785e239f1aeabc91c4797781318ef2427ed2"
	I0803 23:05:56.966478  331973 logs.go:123] Gathering logs for kube-scheduler [3c5bf9abcdaf9e84147eba02e6f052cb0bfd42c1d8cafffc46a7ca2c151fe7a4] ...
	I0803 23:05:56.966521  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c5bf9abcdaf9e84147eba02e6f052cb0bfd42c1d8cafffc46a7ca2c151fe7a4"
	I0803 23:05:57.011959  331973 logs.go:123] Gathering logs for kube-proxy [d3e41eeab565723a5b99ad40d3a1c93a2e6be163c313483c872d90305196bfc1] ...
	I0803 23:05:57.011998  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3e41eeab565723a5b99ad40d3a1c93a2e6be163c313483c872d90305196bfc1"
	I0803 23:05:57.049630  331973 logs.go:123] Gathering logs for CRI-O ...
	I0803 23:05:57.049663  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0803 23:05:58.114830  331973 logs.go:123] Gathering logs for container status ...
	I0803 23:05:58.114905  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 23:06:00.687346  331973 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:06:00.709224  331973 api_server.go:72] duration metric: took 1m52.421429525s to wait for apiserver process to appear ...
	I0803 23:06:00.709258  331973 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:06:00.709311  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0803 23:06:00.709388  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0803 23:06:00.750990  331973 cri.go:89] found id: "99cd8af102599e274582a5b8d5dc785e239f1aeabc91c4797781318ef2427ed2"
	I0803 23:06:00.751018  331973 cri.go:89] found id: ""
	I0803 23:06:00.751029  331973 logs.go:276] 1 containers: [99cd8af102599e274582a5b8d5dc785e239f1aeabc91c4797781318ef2427ed2]
	I0803 23:06:00.751087  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:06:00.755273  331973 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0803 23:06:00.755366  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0803 23:06:00.794769  331973 cri.go:89] found id: "b8a4cf8941db83b559399a869f1e73bc20c0b73f7118b7dd715b4dc9ad2394c9"
	I0803 23:06:00.794797  331973 cri.go:89] found id: ""
	I0803 23:06:00.794807  331973 logs.go:276] 1 containers: [b8a4cf8941db83b559399a869f1e73bc20c0b73f7118b7dd715b4dc9ad2394c9]
	I0803 23:06:00.794883  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:06:00.799272  331973 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0803 23:06:00.799337  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0803 23:06:00.842551  331973 cri.go:89] found id: "ad5943411451d49b2c70562c1457cf6f2d6645ab21360d86649fa762e1b4a229"
	I0803 23:06:00.842579  331973 cri.go:89] found id: ""
	I0803 23:06:00.842590  331973 logs.go:276] 1 containers: [ad5943411451d49b2c70562c1457cf6f2d6645ab21360d86649fa762e1b4a229]
	I0803 23:06:00.842654  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:06:00.848050  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0803 23:06:00.848123  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0803 23:06:00.915418  331973 cri.go:89] found id: "3c5bf9abcdaf9e84147eba02e6f052cb0bfd42c1d8cafffc46a7ca2c151fe7a4"
	I0803 23:06:00.915447  331973 cri.go:89] found id: ""
	I0803 23:06:00.915458  331973 logs.go:276] 1 containers: [3c5bf9abcdaf9e84147eba02e6f052cb0bfd42c1d8cafffc46a7ca2c151fe7a4]
	I0803 23:06:00.915519  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:06:00.920182  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0803 23:06:00.920247  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0803 23:06:00.959873  331973 cri.go:89] found id: "d3e41eeab565723a5b99ad40d3a1c93a2e6be163c313483c872d90305196bfc1"
	I0803 23:06:00.959911  331973 cri.go:89] found id: ""
	I0803 23:06:00.959922  331973 logs.go:276] 1 containers: [d3e41eeab565723a5b99ad40d3a1c93a2e6be163c313483c872d90305196bfc1]
	I0803 23:06:00.959984  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:06:00.964354  331973 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0803 23:06:00.964426  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0803 23:06:01.006874  331973 cri.go:89] found id: "926e17381f97111bcb5818f9abac36cb06b49250b0e09244f2a22c3a7e4409fc"
	I0803 23:06:01.006899  331973 cri.go:89] found id: ""
	I0803 23:06:01.006909  331973 logs.go:276] 1 containers: [926e17381f97111bcb5818f9abac36cb06b49250b0e09244f2a22c3a7e4409fc]
	I0803 23:06:01.006965  331973 ssh_runner.go:195] Run: which crictl
	I0803 23:06:01.011733  331973 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0803 23:06:01.011801  331973 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0803 23:06:01.052773  331973 cri.go:89] found id: ""
	I0803 23:06:01.052808  331973 logs.go:276] 0 containers: []
	W0803 23:06:01.052817  331973 logs.go:278] No container was found matching "kindnet"
	I0803 23:06:01.052828  331973 logs.go:123] Gathering logs for CRI-O ...
	I0803 23:06:01.052847  331973 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-033173 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
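The failure here is a timeout rather than a broken addon: the wait loop (kapi.go / pod_ready.go) polls each addon's pods by label selector until they report Ready, and the log shows csi-hostpath-driver, ingress-nginx and gcp-auth finishing between 23:05:18 and 23:05:31 and metrics-server turning Ready at 23:05:56, yet the start command had still not returned when the 40-minute test budget (2400s) expired and it was killed. A rough way to re-check the same selectors by hand is sketched below; the label selectors are copied from the "waiting for pod" lines above, while the kubectl context name "addons-033173" is an assumption based on the profile name.

# Sketch only, not part of the test harness: re-query the selectors the wait loop was polling.
# Assumptions: kubectl is available and the run left a context named "addons-033173"
# in the kubeconfig; the selectors are taken verbatim from the log lines above.
kubectl --context addons-033173 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
kubectl --context addons-033173 get pods -A -l app.kubernetes.io/name=ingress-nginx
kubectl --context addons-033173 get pods -A -l kubernetes.io/minikube-addons=gcp-auth
kubectl --context addons-033173 -n kube-system get pod metrics-server-c59844bb4-dwpwm -o wide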

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 node stop m02 -v=7 --alsologtostderr
E0803 23:53:05.380506  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:53:46.341165  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.483835631s)

                                                
                                                
-- stdout --
	* Stopping node "ha-349588-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:52:55.885068  350157 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:52:55.885345  350157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:52:55.885356  350157 out.go:304] Setting ErrFile to fd 2...
	I0803 23:52:55.885360  350157 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:52:55.885603  350157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:52:55.885870  350157 mustload.go:65] Loading cluster: ha-349588
	I0803 23:52:55.886239  350157 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:52:55.886266  350157 stop.go:39] StopHost: ha-349588-m02
	I0803 23:52:55.886721  350157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:52:55.886778  350157 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:52:55.902968  350157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44063
	I0803 23:52:55.903464  350157 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:52:55.904115  350157 main.go:141] libmachine: Using API Version  1
	I0803 23:52:55.904146  350157 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:52:55.904499  350157 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:52:55.906791  350157 out.go:177] * Stopping node "ha-349588-m02"  ...
	I0803 23:52:55.908010  350157 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0803 23:52:55.908040  350157 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:52:55.908298  350157 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0803 23:52:55.908338  350157 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:52:55.911367  350157 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:52:55.911848  350157 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:52:55.911889  350157 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:52:55.912029  350157 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:52:55.912199  350157 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:52:55.912363  350157 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:52:55.912472  350157 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:52:55.997103  350157 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0803 23:52:56.054339  350157 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0803 23:52:56.114424  350157 main.go:141] libmachine: Stopping "ha-349588-m02"...
	I0803 23:52:56.114463  350157 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:52:56.116017  350157 main.go:141] libmachine: (ha-349588-m02) Calling .Stop
	I0803 23:52:56.119752  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 0/120
	I0803 23:52:57.121111  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 1/120
	I0803 23:52:58.122741  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 2/120
	I0803 23:52:59.124486  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 3/120
	I0803 23:53:00.126418  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 4/120
	I0803 23:53:01.128560  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 5/120
	I0803 23:53:02.130697  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 6/120
	I0803 23:53:03.131930  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 7/120
	I0803 23:53:04.133392  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 8/120
	I0803 23:53:05.134961  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 9/120
	I0803 23:53:06.136892  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 10/120
	I0803 23:53:07.138260  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 11/120
	I0803 23:53:08.140021  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 12/120
	I0803 23:53:09.141422  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 13/120
	I0803 23:53:10.143118  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 14/120
	I0803 23:53:11.144890  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 15/120
	I0803 23:53:12.146422  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 16/120
	I0803 23:53:13.148036  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 17/120
	I0803 23:53:14.149353  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 18/120
	I0803 23:53:15.151042  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 19/120
	I0803 23:53:16.153350  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 20/120
	I0803 23:53:17.154741  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 21/120
	I0803 23:53:18.156145  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 22/120
	I0803 23:53:19.158201  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 23/120
	I0803 23:53:20.160469  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 24/120
	I0803 23:53:21.162647  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 25/120
	I0803 23:53:22.164049  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 26/120
	I0803 23:53:23.165349  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 27/120
	I0803 23:53:24.166975  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 28/120
	I0803 23:53:25.168859  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 29/120
	I0803 23:53:26.171235  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 30/120
	I0803 23:53:27.172956  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 31/120
	I0803 23:53:28.174498  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 32/120
	I0803 23:53:29.176169  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 33/120
	I0803 23:53:30.177709  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 34/120
	I0803 23:53:31.179858  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 35/120
	I0803 23:53:32.181314  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 36/120
	I0803 23:53:33.182621  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 37/120
	I0803 23:53:34.184088  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 38/120
	I0803 23:53:35.185292  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 39/120
	I0803 23:53:36.187095  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 40/120
	I0803 23:53:37.188526  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 41/120
	I0803 23:53:38.190847  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 42/120
	I0803 23:53:39.193135  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 43/120
	I0803 23:53:40.194968  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 44/120
	I0803 23:53:41.196993  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 45/120
	I0803 23:53:42.198647  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 46/120
	I0803 23:53:43.200498  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 47/120
	I0803 23:53:44.202403  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 48/120
	I0803 23:53:45.204110  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 49/120
	I0803 23:53:46.205679  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 50/120
	I0803 23:53:47.206925  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 51/120
	I0803 23:53:48.208577  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 52/120
	I0803 23:53:49.210811  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 53/120
	I0803 23:53:50.212195  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 54/120
	I0803 23:53:51.214453  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 55/120
	I0803 23:53:52.216258  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 56/120
	I0803 23:53:53.217763  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 57/120
	I0803 23:53:54.220430  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 58/120
	I0803 23:53:55.221875  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 59/120
	I0803 23:53:56.224058  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 60/120
	I0803 23:53:57.225373  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 61/120
	I0803 23:53:58.226782  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 62/120
	I0803 23:53:59.228123  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 63/120
	I0803 23:54:00.229450  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 64/120
	I0803 23:54:01.231376  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 65/120
	I0803 23:54:02.233333  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 66/120
	I0803 23:54:03.235130  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 67/120
	I0803 23:54:04.236471  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 68/120
	I0803 23:54:05.237860  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 69/120
	I0803 23:54:06.239775  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 70/120
	I0803 23:54:07.241362  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 71/120
	I0803 23:54:08.243048  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 72/120
	I0803 23:54:09.244695  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 73/120
	I0803 23:54:10.246242  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 74/120
	I0803 23:54:11.248087  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 75/120
	I0803 23:54:12.250307  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 76/120
	I0803 23:54:13.251659  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 77/120
	I0803 23:54:14.253457  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 78/120
	I0803 23:54:15.254807  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 79/120
	I0803 23:54:16.256604  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 80/120
	I0803 23:54:17.258157  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 81/120
	I0803 23:54:18.259999  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 82/120
	I0803 23:54:19.261457  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 83/120
	I0803 23:54:20.263051  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 84/120
	I0803 23:54:21.264859  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 85/120
	I0803 23:54:22.266505  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 86/120
	I0803 23:54:23.268297  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 87/120
	I0803 23:54:24.269999  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 88/120
	I0803 23:54:25.272051  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 89/120
	I0803 23:54:26.274670  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 90/120
	I0803 23:54:27.276569  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 91/120
	I0803 23:54:28.278156  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 92/120
	I0803 23:54:29.280147  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 93/120
	I0803 23:54:30.281629  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 94/120
	I0803 23:54:31.282897  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 95/120
	I0803 23:54:32.284297  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 96/120
	I0803 23:54:33.286046  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 97/120
	I0803 23:54:34.288081  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 98/120
	I0803 23:54:35.289601  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 99/120
	I0803 23:54:36.291725  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 100/120
	I0803 23:54:37.293703  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 101/120
	I0803 23:54:38.295919  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 102/120
	I0803 23:54:39.297440  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 103/120
	I0803 23:54:40.298781  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 104/120
	I0803 23:54:41.300535  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 105/120
	I0803 23:54:42.302075  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 106/120
	I0803 23:54:43.303616  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 107/120
	I0803 23:54:44.305371  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 108/120
	I0803 23:54:45.306780  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 109/120
	I0803 23:54:46.308950  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 110/120
	I0803 23:54:47.310450  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 111/120
	I0803 23:54:48.312138  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 112/120
	I0803 23:54:49.313485  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 113/120
	I0803 23:54:50.314909  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 114/120
	I0803 23:54:51.316777  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 115/120
	I0803 23:54:52.318257  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 116/120
	I0803 23:54:53.320232  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 117/120
	I0803 23:54:54.322382  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 118/120
	I0803 23:54:55.323730  350157 main.go:141] libmachine: (ha-349588-m02) Waiting for machine to stop 119/120
	I0803 23:54:56.324285  350157 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0803 23:54:56.324468  350157 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-349588 node stop m02 -v=7 --alsologtostderr": exit status 30
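The stop failure is likewise a timeout: after backing up /etc/cni and /etc/kubernetes the driver calls Stop on the m02 machine and then polls "Waiting for machine to stop" roughly once per second for 120 iterations, and after the full two minutes the domain still reports "Running", so the command gives up with exit status 30. Checking the underlying libvirt domain directly can show whether the guest is ignoring the graceful shutdown request; the commands below are a sketch only, and the domain name assumes the kvm2 driver registers the libvirt domain under the node name, which matches the "(ha-349588-m02)" prefix in the log.

# Sketch only: inspect the libvirt domain the stop loop was waiting on.
# Assumption: the domain is named "ha-349588-m02" on the system connection.
virsh -c qemu:///system domstate ha-349588-m02
virsh -c qemu:///system dominfo ha-349588-m02
# If the guest never honours the graceful shutdown, force it off (destructive):
virsh -c qemu:///system destroy ha-349588-m02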
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
E0803 23:55:08.262104  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (19.140919097s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:54:56.372578  350614 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:54:56.372724  350614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:54:56.372732  350614 out.go:304] Setting ErrFile to fd 2...
	I0803 23:54:56.372749  350614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:54:56.372962  350614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:54:56.373149  350614 out.go:298] Setting JSON to false
	I0803 23:54:56.373184  350614 mustload.go:65] Loading cluster: ha-349588
	I0803 23:54:56.373229  350614 notify.go:220] Checking for updates...
	I0803 23:54:56.373667  350614 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:54:56.373698  350614 status.go:255] checking status of ha-349588 ...
	I0803 23:54:56.374076  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:54:56.374146  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:54:56.391550  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0803 23:54:56.392083  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:54:56.392712  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:54:56.392736  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:54:56.393135  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:54:56.393344  350614 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:54:56.395039  350614 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:54:56.395065  350614 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:54:56.395398  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:54:56.395453  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:54:56.412245  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I0803 23:54:56.412754  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:54:56.413237  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:54:56.413267  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:54:56.413679  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:54:56.413911  350614 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:54:56.417066  350614 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:54:56.417546  350614 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:54:56.417603  350614 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:54:56.417671  350614 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:54:56.417989  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:54:56.418023  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:54:56.435091  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I0803 23:54:56.435505  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:54:56.435994  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:54:56.436023  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:54:56.436340  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:54:56.436562  350614 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:54:56.436751  350614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:54:56.436791  350614 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:54:56.439743  350614 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:54:56.440225  350614 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:54:56.440254  350614 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:54:56.440407  350614 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:54:56.440598  350614 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:54:56.440773  350614 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:54:56.440915  350614 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:54:56.524664  350614 ssh_runner.go:195] Run: systemctl --version
	I0803 23:54:56.533615  350614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:54:56.553720  350614 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:54:56.553760  350614 api_server.go:166] Checking apiserver status ...
	I0803 23:54:56.553815  350614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:54:56.571305  350614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:54:56.582313  350614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:54:56.582375  350614 ssh_runner.go:195] Run: ls
	I0803 23:54:56.587242  350614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:54:56.594203  350614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:54:56.594236  350614 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:54:56.594250  350614 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:54:56.594274  350614 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:54:56.594599  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:54:56.594645  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:54:56.610640  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0803 23:54:56.611129  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:54:56.611641  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:54:56.611684  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:54:56.612038  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:54:56.612241  350614 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:54:56.613884  350614 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0803 23:54:56.613904  350614 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:54:56.614225  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:54:56.614292  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:54:56.629630  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I0803 23:54:56.630214  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:54:56.630769  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:54:56.630797  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:54:56.631172  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:54:56.631393  350614 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:54:56.634484  350614 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:54:56.634975  350614 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:54:56.635004  350614 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:54:56.635151  350614 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:54:56.635472  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:54:56.635518  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:54:56.651595  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38553
	I0803 23:54:56.652111  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:54:56.652618  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:54:56.652638  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:54:56.652993  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:54:56.653185  350614 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:54:56.653410  350614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:54:56.653433  350614 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:54:56.656415  350614 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:54:56.656853  350614 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:54:56.656880  350614 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:54:56.657057  350614 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:54:56.657243  350614 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:54:56.657453  350614 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:54:56.657621  350614 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	W0803 23:55:15.085831  350614 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:15.085971  350614 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0803 23:55:15.085996  350614 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:15.086004  350614 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:55:15.086054  350614 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
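ha-349588-m02 is the node that was just stopped, and its failure surfaces at the TCP layer: dialing port 22 returns "no route to host", so the node is marked Error/Nonexistent before any Kubernetes-level check runs. A small sketch of that reachability probe; the single fixed timeout is an assumption, since the roughly 18 s between the "new ssh client" line and the failure suggests the real code retries for longer:

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable dials the node's SSH port and reports whether a TCP connection
// can be established at all; the kernel's "connect: no route to host" shows up
// here as a non-nil error.
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.39.67:22", 10*time.Second); err != nil {
		fmt.Println("host unreachable:", err)
	}
}
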
	I0803 23:55:15.086065  350614 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:55:15.086422  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:15.086482  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:15.102075  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39053
	I0803 23:55:15.102581  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:15.103220  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:55:15.103244  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:15.103621  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:15.103843  350614 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:55:15.105708  350614 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:55:15.105725  350614 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:15.106035  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:15.106076  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:15.121252  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35353
	I0803 23:55:15.121810  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:15.122303  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:55:15.122326  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:15.122687  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:15.122899  350614 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:55:15.125694  350614 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:15.126168  350614 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:15.126188  350614 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:15.126376  350614 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:15.126827  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:15.126882  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:15.142618  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I0803 23:55:15.143162  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:15.143780  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:55:15.143805  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:15.144124  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:15.144327  350614 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:55:15.144532  350614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:15.144554  350614 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:55:15.147494  350614 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:15.148003  350614 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:15.148035  350614 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:15.148163  350614 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:55:15.148376  350614 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:55:15.148650  350614 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:55:15.148814  350614 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:55:15.240584  350614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:15.260453  350614 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:15.260487  350614 api_server.go:166] Checking apiserver status ...
	I0803 23:55:15.260533  350614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:15.278366  350614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:55:15.289092  350614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:15.289181  350614 ssh_runner.go:195] Run: ls
	I0803 23:55:15.294925  350614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:15.299399  350614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:15.299431  350614 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:55:15.299444  350614 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
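Each control-plane node is checked the same way: find the kube-apiserver PID with pgrep, look for a freezer entry in /proc/<pid>/cgroup, and fall back to /healthz when that entry is missing (as it typically is on cgroup-v2-only hosts, which would explain the warning above). A rough local equivalent of the cgroup lookup, with the PID hard-coded from the log purely for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// freezerCgroup returns the freezer cgroup path for a PID, or an error if no
// "freezer" controller line exists (typical on pure cgroup v2 hosts).
func freezerCgroup(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// cgroup v1 lines look like "7:freezer:/kubepods/..."; the controller
		// field may list several controllers separated by commas.
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) != 3 {
			continue
		}
		for _, ctrl := range strings.Split(parts[1], ",") {
			if ctrl == "freezer" {
				return parts[2], nil
			}
		}
	}
	return "", fmt.Errorf("no freezer cgroup entry for pid %d", pid)
}

func main() {
	path, err := freezerCgroup(1567) // PID taken from the log above, illustrative only
	fmt.Println(path, err)
}
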
	I0803 23:55:15.299501  350614 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:55:15.299872  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:15.299913  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:15.315313  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0803 23:55:15.315787  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:15.316329  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:55:15.316353  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:15.316668  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:15.316850  350614 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:55:15.318686  350614 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:55:15.318709  350614 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:15.319165  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:15.319224  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:15.334839  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44767
	I0803 23:55:15.335302  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:15.335753  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:55:15.335777  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:15.336132  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:15.336309  350614 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:55:15.339078  350614 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:15.339574  350614 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:15.339616  350614 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:15.339773  350614 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:15.340108  350614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:15.340159  350614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:15.355441  350614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33187
	I0803 23:55:15.355938  350614 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:15.356480  350614 main.go:141] libmachine: Using API Version  1
	I0803 23:55:15.356508  350614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:15.356839  350614 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:15.357049  350614 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:55:15.357238  350614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:15.357260  350614 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:55:15.360109  350614 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:15.360538  350614 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:15.360564  350614 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:15.360701  350614 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:55:15.360901  350614 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:55:15.361055  350614 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:55:15.361209  350614 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:55:15.446722  350614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:15.464933  350614 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
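Before each node's status line is emitted, the reachable hosts are probed over SSH with df -h /var | awk 'NR==2{print $5}' (how full /var is) and sudo systemctl is-active --quiet service kubelet. A stand-alone sketch of that remote probe using golang.org/x/crypto/ssh; the address and key path are copied from the log, but the wiring is an assumption rather than minikube's own sshutil code:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes a single command on the node and returns its stdout.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.61:22", "docker",
		"/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa",
		"df -h /var | awk 'NR==2{print $5}'")
	fmt.Println(out, err)
}
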

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-349588 -n ha-349588
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-349588 logs -n 25: (1.484477774s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588:/home/docker/cp-test_ha-349588-m03_ha-349588.txt                       |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588 sudo cat                                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588.txt                                 |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m02:/home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m04 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp testdata/cp-test.txt                                                | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588:/home/docker/cp-test_ha-349588-m04_ha-349588.txt                       |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588 sudo cat                                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588.txt                                 |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m02:/home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03:/home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m03 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-349588 node stop m02 -v=7                                                     | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
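The last Audit entry is the node stop m02 command this subtest exercised; its End Time column is blank because the command never reported completion. For reference, the harness drives minikube through os/exec roughly like the sketch below (binary path, profile and arguments copied from the table; this is an illustration, not the test's actual helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same binary, profile and arguments as the final Audit entry above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-349588",
		"node", "stop", "m02", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("node stop failed:", err)
	}
}
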
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:48:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:48:09.418625  346092 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:48:09.418752  346092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:48:09.418762  346092 out.go:304] Setting ErrFile to fd 2...
	I0803 23:48:09.418768  346092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:48:09.418971  346092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:48:09.419574  346092 out.go:298] Setting JSON to false
	I0803 23:48:09.420569  346092 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":30637,"bootTime":1722698252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:48:09.420633  346092 start.go:139] virtualization: kvm guest
	I0803 23:48:09.422786  346092 out.go:177] * [ha-349588] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:48:09.424092  346092 out.go:177]   - MINIKUBE_LOCATION=19370
	I0803 23:48:09.424144  346092 notify.go:220] Checking for updates...
	I0803 23:48:09.426416  346092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:48:09.427707  346092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:48:09.429120  346092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:48:09.430526  346092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:48:09.431632  346092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:48:09.432954  346092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:48:09.470138  346092 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 23:48:09.471317  346092 start.go:297] selected driver: kvm2
	I0803 23:48:09.471334  346092 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:48:09.471347  346092 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:48:09.472158  346092 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:48:09.472262  346092 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:48:09.488603  346092 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:48:09.488655  346092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:48:09.488888  346092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:48:09.488957  346092 cni.go:84] Creating CNI manager for ""
	I0803 23:48:09.488969  346092 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0803 23:48:09.488977  346092 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 23:48:09.489047  346092 start.go:340] cluster config:
	{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:48:09.489163  346092 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:48:09.490895  346092 out.go:177] * Starting "ha-349588" primary control-plane node in "ha-349588" cluster
	I0803 23:48:09.491984  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:48:09.492039  346092 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:48:09.492063  346092 cache.go:56] Caching tarball of preloaded images
	I0803 23:48:09.492163  346092 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:48:09.492174  346092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:48:09.492520  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:48:09.492548  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json: {Name:mk903cfda9df964846737e7e0ecec8ea46a5827c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:09.492717  346092 start.go:360] acquireMachinesLock for ha-349588: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:48:09.492747  346092 start.go:364] duration metric: took 17.293µs to acquireMachinesLock for "ha-349588"
	I0803 23:48:09.492765  346092 start.go:93] Provisioning new machine with config: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:48:09.492824  346092 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 23:48:09.494421  346092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:48:09.494578  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:48:09.494618  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:48:09.509993  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0803 23:48:09.510451  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:48:09.511049  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:48:09.511070  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:48:09.511439  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:48:09.511701  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:09.511862  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:09.512020  346092 start.go:159] libmachine.API.Create for "ha-349588" (driver="kvm2")
	I0803 23:48:09.512050  346092 client.go:168] LocalClient.Create starting
	I0803 23:48:09.512089  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0803 23:48:09.512149  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:48:09.512174  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:48:09.512252  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0803 23:48:09.512279  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:48:09.512295  346092 main.go:141] libmachine: Parsing certificate...
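The client setup above reads ca.pem and cert.pem from the profile's certs directory, PEM-decodes them and parses the certificates before any VM is created. A small sketch of that decode/parse step using only the standard library (paths reused from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// parseCert reads a PEM file and returns the first certificate it contains.
func parseCert(path string) (*x509.Certificate, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return nil, fmt.Errorf("%s: no PEM block found", path)
	}
	return x509.ParseCertificate(block.Bytes)
}

func main() {
	for _, p := range []string{
		"/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem",
		"/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem",
	} {
		cert, err := parseCert(p)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(p, "subject:", cert.Subject.CommonName)
	}
}
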
	I0803 23:48:09.512320  346092 main.go:141] libmachine: Running pre-create checks...
	I0803 23:48:09.512339  346092 main.go:141] libmachine: (ha-349588) Calling .PreCreateCheck
	I0803 23:48:09.512682  346092 main.go:141] libmachine: (ha-349588) Calling .GetConfigRaw
	I0803 23:48:09.513102  346092 main.go:141] libmachine: Creating machine...
	I0803 23:48:09.513120  346092 main.go:141] libmachine: (ha-349588) Calling .Create
	I0803 23:48:09.513250  346092 main.go:141] libmachine: (ha-349588) Creating KVM machine...
	I0803 23:48:09.514581  346092 main.go:141] libmachine: (ha-349588) DBG | found existing default KVM network
	I0803 23:48:09.515280  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.515113  346115 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0803 23:48:09.515306  346092 main.go:141] libmachine: (ha-349588) DBG | created network xml: 
	I0803 23:48:09.515319  346092 main.go:141] libmachine: (ha-349588) DBG | <network>
	I0803 23:48:09.515327  346092 main.go:141] libmachine: (ha-349588) DBG |   <name>mk-ha-349588</name>
	I0803 23:48:09.515341  346092 main.go:141] libmachine: (ha-349588) DBG |   <dns enable='no'/>
	I0803 23:48:09.515351  346092 main.go:141] libmachine: (ha-349588) DBG |   
	I0803 23:48:09.515360  346092 main.go:141] libmachine: (ha-349588) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0803 23:48:09.515366  346092 main.go:141] libmachine: (ha-349588) DBG |     <dhcp>
	I0803 23:48:09.515374  346092 main.go:141] libmachine: (ha-349588) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0803 23:48:09.515391  346092 main.go:141] libmachine: (ha-349588) DBG |     </dhcp>
	I0803 23:48:09.515412  346092 main.go:141] libmachine: (ha-349588) DBG |   </ip>
	I0803 23:48:09.515424  346092 main.go:141] libmachine: (ha-349588) DBG |   
	I0803 23:48:09.515430  346092 main.go:141] libmachine: (ha-349588) DBG | </network>
	I0803 23:48:09.515435  346092 main.go:141] libmachine: (ha-349588) DBG | 
	I0803 23:48:09.520559  346092 main.go:141] libmachine: (ha-349588) DBG | trying to create private KVM network mk-ha-349588 192.168.39.0/24...
	I0803 23:48:09.590357  346092 main.go:141] libmachine: (ha-349588) DBG | private KVM network mk-ha-349588 192.168.39.0/24 created
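The driver defines the <network> XML it just printed and then starts the network, which is what the "private KVM network ... created" line reports. A compact sketch of those two libvirt calls using the Go bindings; the libvirt.org/go/libvirt import path is the current upstream one and may differ from what the driver actually vendors:

package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// networkXML mirrors the <network> document logged above.
const networkXML = `<network>
  <name>mk-ha-349588</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network from the XML above, then start it.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()
	if err := net.Create(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("network mk-ha-349588 created")
}
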
	I0803 23:48:09.590389  346092 main.go:141] libmachine: (ha-349588) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588 ...
	I0803 23:48:09.590434  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.590305  346115 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:48:09.590461  346092 main.go:141] libmachine: (ha-349588) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:48:09.590489  346092 main.go:141] libmachine: (ha-349588) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:48:09.872162  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.871931  346115 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa...
	I0803 23:48:09.925823  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.925663  346115 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/ha-349588.rawdisk...
	I0803 23:48:09.925875  346092 main.go:141] libmachine: (ha-349588) DBG | Writing magic tar header
	I0803 23:48:09.925892  346092 main.go:141] libmachine: (ha-349588) DBG | Writing SSH key tar header
	I0803 23:48:09.925900  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.925798  346115 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588 ...
	I0803 23:48:09.925912  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588
	I0803 23:48:09.925995  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588 (perms=drwx------)
	I0803 23:48:09.926018  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0803 23:48:09.926030  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:48:09.926051  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0803 23:48:09.926063  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:48:09.926077  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0803 23:48:09.926086  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:48:09.926094  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:48:09.926102  346092 main.go:141] libmachine: (ha-349588) Creating domain...
	I0803 23:48:09.926112  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0803 23:48:09.926120  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:48:09.926126  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:48:09.926135  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home
	I0803 23:48:09.926168  346092 main.go:141] libmachine: (ha-349588) DBG | Skipping /home - not owner
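The "Fixing permissions" walk above climbs from the machine directory through each parent, making sure every directory the current user owns is traversable, and skips /home because it belongs to someone else. A rough sketch under the assumption that the goal is simply to set the owner-execute bit on owned directories:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// ensureTraversable walks from dir up to stop, adding the owner-execute bit to
// each directory the current user owns and skipping the rest.
func ensureTraversable(dir, stop string) error {
	uid := os.Getuid()
	for d := dir; ; d = filepath.Dir(d) {
		info, err := os.Stat(d)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != uid {
			fmt.Println("Skipping", d, "- not owner")
		} else if err := os.Chmod(d, info.Mode()|0o100); err != nil {
			return err
		}
		if d == stop || d == filepath.Dir(d) {
			return nil
		}
	}
}

func main() {
	_ = ensureTraversable(
		"/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588",
		"/home")
}
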
	I0803 23:48:09.927340  346092 main.go:141] libmachine: (ha-349588) define libvirt domain using xml: 
	I0803 23:48:09.927363  346092 main.go:141] libmachine: (ha-349588) <domain type='kvm'>
	I0803 23:48:09.927373  346092 main.go:141] libmachine: (ha-349588)   <name>ha-349588</name>
	I0803 23:48:09.927384  346092 main.go:141] libmachine: (ha-349588)   <memory unit='MiB'>2200</memory>
	I0803 23:48:09.927393  346092 main.go:141] libmachine: (ha-349588)   <vcpu>2</vcpu>
	I0803 23:48:09.927402  346092 main.go:141] libmachine: (ha-349588)   <features>
	I0803 23:48:09.927414  346092 main.go:141] libmachine: (ha-349588)     <acpi/>
	I0803 23:48:09.927420  346092 main.go:141] libmachine: (ha-349588)     <apic/>
	I0803 23:48:09.927429  346092 main.go:141] libmachine: (ha-349588)     <pae/>
	I0803 23:48:09.927443  346092 main.go:141] libmachine: (ha-349588)     
	I0803 23:48:09.927452  346092 main.go:141] libmachine: (ha-349588)   </features>
	I0803 23:48:09.927466  346092 main.go:141] libmachine: (ha-349588)   <cpu mode='host-passthrough'>
	I0803 23:48:09.927474  346092 main.go:141] libmachine: (ha-349588)   
	I0803 23:48:09.927485  346092 main.go:141] libmachine: (ha-349588)   </cpu>
	I0803 23:48:09.927493  346092 main.go:141] libmachine: (ha-349588)   <os>
	I0803 23:48:09.927500  346092 main.go:141] libmachine: (ha-349588)     <type>hvm</type>
	I0803 23:48:09.927509  346092 main.go:141] libmachine: (ha-349588)     <boot dev='cdrom'/>
	I0803 23:48:09.927519  346092 main.go:141] libmachine: (ha-349588)     <boot dev='hd'/>
	I0803 23:48:09.927528  346092 main.go:141] libmachine: (ha-349588)     <bootmenu enable='no'/>
	I0803 23:48:09.927541  346092 main.go:141] libmachine: (ha-349588)   </os>
	I0803 23:48:09.927557  346092 main.go:141] libmachine: (ha-349588)   <devices>
	I0803 23:48:09.927567  346092 main.go:141] libmachine: (ha-349588)     <disk type='file' device='cdrom'>
	I0803 23:48:09.927580  346092 main.go:141] libmachine: (ha-349588)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/boot2docker.iso'/>
	I0803 23:48:09.927595  346092 main.go:141] libmachine: (ha-349588)       <target dev='hdc' bus='scsi'/>
	I0803 23:48:09.927606  346092 main.go:141] libmachine: (ha-349588)       <readonly/>
	I0803 23:48:09.927615  346092 main.go:141] libmachine: (ha-349588)     </disk>
	I0803 23:48:09.927625  346092 main.go:141] libmachine: (ha-349588)     <disk type='file' device='disk'>
	I0803 23:48:09.927637  346092 main.go:141] libmachine: (ha-349588)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:48:09.927655  346092 main.go:141] libmachine: (ha-349588)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/ha-349588.rawdisk'/>
	I0803 23:48:09.927670  346092 main.go:141] libmachine: (ha-349588)       <target dev='hda' bus='virtio'/>
	I0803 23:48:09.927681  346092 main.go:141] libmachine: (ha-349588)     </disk>
	I0803 23:48:09.927691  346092 main.go:141] libmachine: (ha-349588)     <interface type='network'>
	I0803 23:48:09.927707  346092 main.go:141] libmachine: (ha-349588)       <source network='mk-ha-349588'/>
	I0803 23:48:09.927717  346092 main.go:141] libmachine: (ha-349588)       <model type='virtio'/>
	I0803 23:48:09.927728  346092 main.go:141] libmachine: (ha-349588)     </interface>
	I0803 23:48:09.927743  346092 main.go:141] libmachine: (ha-349588)     <interface type='network'>
	I0803 23:48:09.927756  346092 main.go:141] libmachine: (ha-349588)       <source network='default'/>
	I0803 23:48:09.927766  346092 main.go:141] libmachine: (ha-349588)       <model type='virtio'/>
	I0803 23:48:09.927776  346092 main.go:141] libmachine: (ha-349588)     </interface>
	I0803 23:48:09.927786  346092 main.go:141] libmachine: (ha-349588)     <serial type='pty'>
	I0803 23:48:09.927795  346092 main.go:141] libmachine: (ha-349588)       <target port='0'/>
	I0803 23:48:09.927804  346092 main.go:141] libmachine: (ha-349588)     </serial>
	I0803 23:48:09.927829  346092 main.go:141] libmachine: (ha-349588)     <console type='pty'>
	I0803 23:48:09.927851  346092 main.go:141] libmachine: (ha-349588)       <target type='serial' port='0'/>
	I0803 23:48:09.927862  346092 main.go:141] libmachine: (ha-349588)     </console>
	I0803 23:48:09.927868  346092 main.go:141] libmachine: (ha-349588)     <rng model='virtio'>
	I0803 23:48:09.927877  346092 main.go:141] libmachine: (ha-349588)       <backend model='random'>/dev/random</backend>
	I0803 23:48:09.927883  346092 main.go:141] libmachine: (ha-349588)     </rng>
	I0803 23:48:09.927888  346092 main.go:141] libmachine: (ha-349588)     
	I0803 23:48:09.927892  346092 main.go:141] libmachine: (ha-349588)     
	I0803 23:48:09.927898  346092 main.go:141] libmachine: (ha-349588)   </devices>
	I0803 23:48:09.927904  346092 main.go:141] libmachine: (ha-349588) </domain>
	I0803 23:48:09.927911  346092 main.go:141] libmachine: (ha-349588) 
	I0803 23:48:09.932195  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:c8:e6:c5 in network default
	I0803 23:48:09.932825  346092 main.go:141] libmachine: (ha-349588) Ensuring networks are active...
	I0803 23:48:09.932848  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:09.933616  346092 main.go:141] libmachine: (ha-349588) Ensuring network default is active
	I0803 23:48:09.933995  346092 main.go:141] libmachine: (ha-349588) Ensuring network mk-ha-349588 is active
	I0803 23:48:09.934553  346092 main.go:141] libmachine: (ha-349588) Getting domain xml...
	I0803 23:48:09.935413  346092 main.go:141] libmachine: (ha-349588) Creating domain...
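With the <domain> XML printed above, the driver defines the domain and then boots it ("Creating domain..."). A compressed sketch of those two libvirt calls; the XML is read from a hypothetical file here since the full definition already appears in the log, and the import path is again the upstream Go binding rather than necessarily the vendored one:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domainXML would hold the <domain type='kvm'> document logged above.
	domainXML, err := os.ReadFile("ha-349588.xml") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	dom, err := conn.DomainDefineXML(string(domainXML)) // persistent definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boot the VM
		log.Fatal(err)
	}
	log.Println("domain ha-349588 defined and started")
}
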
	I0803 23:48:11.143458  346092 main.go:141] libmachine: (ha-349588) Waiting to get IP...
	I0803 23:48:11.144201  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.144600  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.144647  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.144590  346115 retry.go:31] will retry after 217.821157ms: waiting for machine to come up
	I0803 23:48:11.364303  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.364800  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.364827  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.364747  346115 retry.go:31] will retry after 290.305806ms: waiting for machine to come up
	I0803 23:48:11.656462  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.656882  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.656915  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.656819  346115 retry.go:31] will retry after 307.829475ms: waiting for machine to come up
	I0803 23:48:11.966421  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.966824  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.966854  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.966778  346115 retry.go:31] will retry after 424.675082ms: waiting for machine to come up
	I0803 23:48:12.393572  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:12.394043  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:12.394075  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:12.393985  346115 retry.go:31] will retry after 469.819501ms: waiting for machine to come up
	I0803 23:48:12.865672  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:12.866068  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:12.866113  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:12.866030  346115 retry.go:31] will retry after 703.183302ms: waiting for machine to come up
	I0803 23:48:13.571033  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:13.571450  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:13.571536  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:13.571432  346115 retry.go:31] will retry after 1.123702351s: waiting for machine to come up
	I0803 23:48:14.696577  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:14.697000  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:14.697052  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:14.696947  346115 retry.go:31] will retry after 1.12664628s: waiting for machine to come up
	I0803 23:48:15.824971  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:15.825444  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:15.825471  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:15.825370  346115 retry.go:31] will retry after 1.337432737s: waiting for machine to come up
	I0803 23:48:17.164972  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:17.165341  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:17.165365  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:17.165301  346115 retry.go:31] will retry after 1.584311544s: waiting for machine to come up
	I0803 23:48:18.752092  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:18.752563  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:18.752599  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:18.752497  346115 retry.go:31] will retry after 2.404172369s: waiting for machine to come up
	I0803 23:48:21.159266  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:21.159722  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:21.159746  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:21.159660  346115 retry.go:31] will retry after 3.566530198s: waiting for machine to come up
	I0803 23:48:24.727868  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:24.728217  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:24.728244  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:24.728197  346115 retry.go:31] will retry after 4.050810748s: waiting for machine to come up
	I0803 23:48:28.782752  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:28.783279  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:28.783306  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:28.783240  346115 retry.go:31] will retry after 4.340405118s: waiting for machine to come up
	I0803 23:48:33.126682  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:33.127152  346092 main.go:141] libmachine: (ha-349588) Found IP for machine: 192.168.39.168
	I0803 23:48:33.127176  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has current primary IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
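The fourteen "will retry after ..." lines above poll for a DHCP lease with a jittered, growing delay (about 200 ms at first, roughly 4.3 s by the end) until the domain reports an IP. A generic sketch of that wait loop; the growth factor and deadline are assumptions chosen to roughly match the logged progression:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// waitForIP polls lookup with a jittered, growing delay until it returns an
// address or the overall deadline passes, mirroring the retry.go lines above.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Sleep for the base delay plus up to 100% jitter, then grow the base.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errNoLease // stands in for "unable to find current IP address"
		}
		return "192.168.39.168", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
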
	I0803 23:48:33.127210  346092 main.go:141] libmachine: (ha-349588) Reserving static IP address...
	I0803 23:48:33.127440  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find host DHCP lease matching {name: "ha-349588", mac: "52:54:00:d9:f9:50", ip: "192.168.39.168"} in network mk-ha-349588
	I0803 23:48:33.206830  346092 main.go:141] libmachine: (ha-349588) DBG | Getting to WaitForSSH function...
	I0803 23:48:33.206864  346092 main.go:141] libmachine: (ha-349588) Reserved static IP address: 192.168.39.168
	I0803 23:48:33.206877  346092 main.go:141] libmachine: (ha-349588) Waiting for SSH to be available...
	I0803 23:48:33.209538  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:33.209926  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588
	I0803 23:48:33.209953  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find defined IP address of network mk-ha-349588 interface with MAC address 52:54:00:d9:f9:50
	I0803 23:48:33.210115  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH client type: external
	I0803 23:48:33.210168  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa (-rw-------)
	I0803 23:48:33.210224  346092 main.go:141] libmachine: (ha-349588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:48:33.210244  346092 main.go:141] libmachine: (ha-349588) DBG | About to run SSH command:
	I0803 23:48:33.210260  346092 main.go:141] libmachine: (ha-349588) DBG | exit 0
	I0803 23:48:33.214010  346092 main.go:141] libmachine: (ha-349588) DBG | SSH cmd err, output: exit status 255: 
	I0803 23:48:33.214077  346092 main.go:141] libmachine: (ha-349588) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0803 23:48:33.214103  346092 main.go:141] libmachine: (ha-349588) DBG | command : exit 0
	I0803 23:48:33.214117  346092 main.go:141] libmachine: (ha-349588) DBG | err     : exit status 255
	I0803 23:48:33.214129  346092 main.go:141] libmachine: (ha-349588) DBG | output  : 
	I0803 23:48:36.215923  346092 main.go:141] libmachine: (ha-349588) DBG | Getting to WaitForSSH function...
	I0803 23:48:36.218572  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.218985  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.219013  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.219082  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH client type: external
	I0803 23:48:36.219103  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa (-rw-------)
	I0803 23:48:36.219141  346092 main.go:141] libmachine: (ha-349588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:48:36.219157  346092 main.go:141] libmachine: (ha-349588) DBG | About to run SSH command:
	I0803 23:48:36.219170  346092 main.go:141] libmachine: (ha-349588) DBG | exit 0
	I0803 23:48:36.337862  346092 main.go:141] libmachine: (ha-349588) DBG | SSH cmd err, output: <nil>: 
	I0803 23:48:36.338165  346092 main.go:141] libmachine: (ha-349588) KVM machine creation complete!
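Machine creation is only declared complete once a trivial "exit 0" succeeds over SSH; the first probe above fails with exit status 255 and is retried a few seconds later. The sketch below shows that readiness check using golang.org/x/crypto/ssh, purely as an illustration under assumed paths and timeouts; the log above shells out to /usr/bin/ssh rather than using this package.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials host:22 and runs "exit 0" until it succeeds or the timeout
// expires, which is the readiness probe performed by the driver above.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0")
				session.Close()
				client.Close()
				if rerr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not available on %s within %s", host, timeout)
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	key := os.ExpandEnv("$HOME/.minikube/machines/ha-349588/id_rsa") // assumed path
	fmt.Println(waitForSSH("192.168.39.168", key, 2*time.Minute))
}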
	I0803 23:48:36.338513  346092 main.go:141] libmachine: (ha-349588) Calling .GetConfigRaw
	I0803 23:48:36.339091  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:36.339286  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:36.339413  346092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:48:36.339424  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:48:36.340646  346092 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:48:36.340662  346092 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:48:36.340669  346092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:48:36.340676  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.342849  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.343214  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.343243  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.343346  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.343540  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.343678  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.343794  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.343956  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.344188  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.344202  346092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:48:36.441183  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:48:36.441208  346092 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:48:36.441216  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.443990  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.444394  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.444424  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.444612  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.444811  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.444973  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.445104  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.445241  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.445426  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.445437  346092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:48:36.542286  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:48:36.542365  346092 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:48:36.542371  346092 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:48:36.542384  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:36.542688  346092 buildroot.go:166] provisioning hostname "ha-349588"
	I0803 23:48:36.542739  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:36.542966  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.545919  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.546337  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.546368  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.546552  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.546755  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.546913  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.547066  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.547213  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.547411  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.547426  346092 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588 && echo "ha-349588" | sudo tee /etc/hostname
	I0803 23:48:36.660783  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588
	
	I0803 23:48:36.660811  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.663757  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.664197  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.664222  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.664426  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.664653  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.664851  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.664993  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.665167  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.665347  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.665362  346092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:48:36.771045  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:48:36.771076  346092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:48:36.771125  346092 buildroot.go:174] setting up certificates
	I0803 23:48:36.771143  346092 provision.go:84] configureAuth start
	I0803 23:48:36.771157  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:36.771474  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:36.774122  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.774504  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.774536  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.774645  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.776986  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.777284  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.777333  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.777497  346092 provision.go:143] copyHostCerts
	I0803 23:48:36.777544  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:48:36.777581  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:48:36.777591  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:48:36.777659  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:48:36.777742  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:48:36.777760  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:48:36.777766  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:48:36.777790  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:48:36.777832  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:48:36.777848  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:48:36.777854  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:48:36.777874  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:48:36.777921  346092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588 san=[127.0.0.1 192.168.39.168 ha-349588 localhost minikube]
	I0803 23:48:36.891183  346092 provision.go:177] copyRemoteCerts
	I0803 23:48:36.891251  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:48:36.891279  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.894188  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.894510  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.894544  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.894727  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.894957  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.895157  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.895313  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:36.978456  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:48:36.978533  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:48:37.004096  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:48:37.004172  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0803 23:48:37.028761  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:48:37.028864  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:48:37.053090  346092 provision.go:87] duration metric: took 281.906542ms to configureAuth
	I0803 23:48:37.053131  346092 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:48:37.053320  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:48:37.053406  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.056081  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.056541  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.056567  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.056725  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.056959  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.057168  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.057334  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.057499  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:37.057703  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:37.057719  346092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:48:37.329708  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:48:37.329744  346092 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:48:37.329755  346092 main.go:141] libmachine: (ha-349588) Calling .GetURL
	I0803 23:48:37.331027  346092 main.go:141] libmachine: (ha-349588) DBG | Using libvirt version 6000000
	I0803 23:48:37.333248  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.333780  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.333808  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.333977  346092 main.go:141] libmachine: Docker is up and running!
	I0803 23:48:37.333999  346092 main.go:141] libmachine: Reticulating splines...
	I0803 23:48:37.334010  346092 client.go:171] duration metric: took 27.821945455s to LocalClient.Create
	I0803 23:48:37.334044  346092 start.go:167] duration metric: took 27.822025189s to libmachine.API.Create "ha-349588"
	I0803 23:48:37.334056  346092 start.go:293] postStartSetup for "ha-349588" (driver="kvm2")
	I0803 23:48:37.334065  346092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:48:37.334081  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.334393  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:48:37.334417  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.336642  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.336927  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.336953  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.337119  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.337290  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.337446  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.337616  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:37.416116  346092 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:48:37.420420  346092 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:48:37.420451  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:48:37.420522  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:48:37.420630  346092 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:48:37.420645  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:48:37.420778  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:48:37.430694  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:48:37.459257  346092 start.go:296] duration metric: took 125.186102ms for postStartSetup
	I0803 23:48:37.459317  346092 main.go:141] libmachine: (ha-349588) Calling .GetConfigRaw
	I0803 23:48:37.459978  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:37.463817  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.464170  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.464194  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.464482  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:48:37.464696  346092 start.go:128] duration metric: took 27.971861416s to createHost
	I0803 23:48:37.464731  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.466929  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.467283  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.467311  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.467442  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.467641  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.467814  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.467939  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.468075  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:37.468271  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:37.468281  346092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:48:37.566205  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722728917.546761844
	
	I0803 23:48:37.566231  346092 fix.go:216] guest clock: 1722728917.546761844
	I0803 23:48:37.566238  346092 fix.go:229] Guest: 2024-08-03 23:48:37.546761844 +0000 UTC Remote: 2024-08-03 23:48:37.464710805 +0000 UTC m=+28.082129480 (delta=82.051039ms)
	I0803 23:48:37.566259  346092 fix.go:200] guest clock delta is within tolerance: 82.051039ms
	I0803 23:48:37.566264  346092 start.go:83] releasing machines lock for "ha-349588", held for 28.07350849s
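The fix.go lines above compare the guest's clock against the host's and skip any adjustment when the delta is small; here it is roughly 82ms, comfortably inside the allowed range. A small sketch of that comparison is shown below; the tolerance value used is an assumption, not minikube's exact constant.

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether the guest clock is close enough to the
// host clock that no adjustment is needed, as in the "guest clock delta is
// within tolerance" message above.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Now()
	host := guest.Add(-82 * time.Millisecond) // roughly the delta reported in the log
	delta, ok := clockWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}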
	I0803 23:48:37.566282  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.566552  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:37.569332  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.569715  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.569744  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.569912  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.570398  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.570564  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.570663  346092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:48:37.570703  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.570820  346092 ssh_runner.go:195] Run: cat /version.json
	I0803 23:48:37.570846  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.573409  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.573687  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.573810  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.573834  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.573988  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.574087  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.574132  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.574275  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.574307  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.574464  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.574495  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.574606  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.574618  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:37.574770  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:37.668834  346092 ssh_runner.go:195] Run: systemctl --version
	I0803 23:48:37.675150  346092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:48:37.834421  346092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:48:37.840819  346092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:48:37.840914  346092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:48:37.857582  346092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:48:37.857616  346092 start.go:495] detecting cgroup driver to use...
	I0803 23:48:37.857725  346092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:48:37.875286  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:48:37.891205  346092 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:48:37.891287  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:48:37.906903  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:48:37.922814  346092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:48:38.040844  346092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:48:38.187910  346092 docker.go:233] disabling docker service ...
	I0803 23:48:38.187983  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:48:38.203041  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:48:38.216953  346092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:48:38.356540  346092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:48:38.471367  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:48:38.485586  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:48:38.504903  346092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:48:38.504998  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.515915  346092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:48:38.515993  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.527084  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.538078  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.549280  346092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:48:38.560636  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.571734  346092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.589785  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.600819  346092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:48:38.610949  346092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:48:38.611026  346092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:48:38.624934  346092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:48:38.635005  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:48:38.748298  346092 ssh_runner.go:195] Run: sudo systemctl restart crio
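The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then reloads systemd and restarts CRI-O. The sketch below drives a similar sed sequence through a generic command runner; runSSH is a hypothetical stand-in for minikube's internal ssh_runner, and only two of the edits are reproduced.

package main

import (
	"fmt"
	"os/exec"
)

// runSSH is a hypothetical helper: it executes one shell command on the guest
// via the system ssh client and surfaces its combined output on failure.
func runSSH(host, cmd string) error {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func configureCRIO(host string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		// pin the pause image and switch the cgroup manager, as in the log above
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if err := runSSH(host, c); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("docker@192.168.39.168"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio reconfigured and restarted")
}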
	I0803 23:48:38.887792  346092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:48:38.887892  346092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:48:38.892998  346092 start.go:563] Will wait 60s for crictl version
	I0803 23:48:38.893081  346092 ssh_runner.go:195] Run: which crictl
	I0803 23:48:38.897088  346092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:48:38.935449  346092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:48:38.935539  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:48:38.965015  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:48:38.995381  346092 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:48:38.996902  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:38.999775  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:39.000151  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:39.000175  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:39.000430  346092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:48:39.004744  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:48:39.019060  346092 kubeadm.go:883] updating cluster {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:48:39.019244  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:48:39.019542  346092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:48:39.058144  346092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0803 23:48:39.058231  346092 ssh_runner.go:195] Run: which lz4
	I0803 23:48:39.062491  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0803 23:48:39.062602  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0803 23:48:39.066958  346092 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 23:48:39.067007  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0803 23:48:40.528763  346092 crio.go:462] duration metric: took 1.466185715s to copy over tarball
	I0803 23:48:40.528870  346092 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 23:48:42.691815  346092 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.162903819s)
	I0803 23:48:42.691850  346092 crio.go:469] duration metric: took 2.163033059s to extract the tarball
	I0803 23:48:42.691861  346092 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 23:48:42.730485  346092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:48:42.778939  346092 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:48:42.778965  346092 cache_images.go:84] Images are preloaded, skipping loading
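The preload flow above works by inspecting `sudo crictl images --output json` for the expected kube-apiserver tag: when it is missing, the preloaded tarball is copied over and extracted into /var, and a second listing confirms the images are present. A minimal sketch of the detection step follows; the JSON field names are my reading of crictl's output and should be treated as an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imagesOutput approximates the JSON emitted by `crictl images --output json`;
// only the fields needed for the check are declared.
type imagesOutput struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows about the given tag,
// which is the check behind "assuming images are not preloaded" above.
func hasImage(crictlJSON []byte, tag string) (bool, error) {
	var out imagesOutput
	if err := json.Unmarshal(crictlJSON, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, t := range img.RepoTags {
			if strings.Contains(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.30.3")
	fmt.Println("preloaded:", ok, err)
}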
	I0803 23:48:42.778978  346092 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.30.3 crio true true} ...
	I0803 23:48:42.779117  346092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:48:42.779205  346092 ssh_runner.go:195] Run: crio config
	I0803 23:48:42.828670  346092 cni.go:84] Creating CNI manager for ""
	I0803 23:48:42.828702  346092 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:48:42.828719  346092 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:48:42.828744  346092 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-349588 NodeName:ha-349588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:48:42.828899  346092 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-349588"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:48:42.828929  346092 kube-vip.go:115] generating kube-vip config ...
	I0803 23:48:42.828978  346092 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:48:42.847620  346092 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:48:42.847740  346092 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
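Both the kubeadm configuration further up and the kube-vip static pod manifest just above are rendered from Go templates and then copied onto the guest (the kube-vip.yaml is scp'd to /etc/kubernetes/manifests a few lines below). A stripped-down sketch of that render step with text/template follows; the template text here is a toy stand-in, not minikube's real asset.

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// vipParams carries the handful of values substituted into the manifest,
// matching the VIP and port visible in the rendered YAML above.
type vipParams struct {
	VIP  string
	Port int
}

const vipTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(vipTemplate))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, vipParams{VIP: "192.168.39.254", Port: 8443}); err != nil {
		fmt.Println(err)
		return
	}
	// The rendered bytes would then be copied to /etc/kubernetes/manifests/kube-vip.yaml.
	fmt.Print(buf.String())
}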
	I0803 23:48:42.847794  346092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:48:42.858826  346092 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:48:42.858911  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:48:42.869873  346092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:48:42.888649  346092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:48:42.906568  346092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:48:42.924948  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0803 23:48:42.942182  346092 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:48:42.946393  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:48:42.959877  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:48:43.090573  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:48:43.109673  346092 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.168
	I0803 23:48:43.109707  346092 certs.go:194] generating shared ca certs ...
	I0803 23:48:43.109736  346092 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.109935  346092 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:48:43.109995  346092 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:48:43.110016  346092 certs.go:256] generating profile certs ...
	I0803 23:48:43.110095  346092 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:48:43.110115  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt with IP's: []
	I0803 23:48:43.176202  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt ...
	I0803 23:48:43.176243  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt: {Name:mk8846af52ab7192f012806995ca5756c43d9aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.176414  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key ...
	I0803 23:48:43.176426  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key: {Name:mk3c59388753fea20f89d92bf03bdfc970c14c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.176505  346092 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad
	I0803 23:48:43.176520  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.254]
	I0803 23:48:43.323353  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad ...
	I0803 23:48:43.323387  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad: {Name:mk8daf6ee6cbba709dc68563d6432752e9aeecab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.323547  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad ...
	I0803 23:48:43.323560  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad: {Name:mkbb0f47da156ebcc5042f70a6f380500f1cb64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.323633  346092 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:48:43.323725  346092 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
	I0803 23:48:43.323784  346092 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:48:43.323798  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt with IP's: []
	I0803 23:48:43.488751  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt ...
	I0803 23:48:43.488786  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt: {Name:mk0d9fb306df1ed4b7eeba1f21c32111bb96f6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.488947  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key ...
	I0803 23:48:43.488958  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key: {Name:mkb23eaf410419e953894e823db49217d4b5f172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.489031  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:48:43.489065  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:48:43.489088  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:48:43.489102  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:48:43.489116  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:48:43.489130  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:48:43.489142  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:48:43.489155  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:48:43.489236  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:48:43.489273  346092 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:48:43.489282  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:48:43.489304  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:48:43.489326  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:48:43.489346  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:48:43.489382  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:48:43.489420  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.489441  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.489453  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.490082  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:48:43.517725  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:48:43.543526  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:48:43.568742  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:48:43.594201  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 23:48:43.620048  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:48:43.644985  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:48:43.677146  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:48:43.703176  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:48:43.727682  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:48:43.752318  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:48:43.777458  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:48:43.794903  346092 ssh_runner.go:195] Run: openssl version
	I0803 23:48:43.801030  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:48:43.812834  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.817558  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.817619  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.823759  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:48:43.834807  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:48:43.845824  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.850583  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.850645  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.856639  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:48:43.868327  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:48:43.880525  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.885320  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.885395  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.891760  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
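	The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow the standard OpenSSL c_rehash layout: each trusted CA under /etc/ssl/certs is linked by its subject hash plus a ".0" suffix, which is exactly what the preceding "openssl x509 -hash -noout" calls compute. A minimal sketch of the same step for the minikube CA, assuming the paths used above:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"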
	I0803 23:48:43.906642  346092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:48:43.916110  346092 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:48:43.916210  346092 kubeadm.go:392] StartCluster: {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:48:43.916326  346092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:48:43.916400  346092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:48:43.980609  346092 cri.go:89] found id: ""
	I0803 23:48:43.980701  346092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:48:43.996519  346092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 23:48:44.010446  346092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 23:48:44.024303  346092 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 23:48:44.024325  346092 kubeadm.go:157] found existing configuration files:
	
	I0803 23:48:44.024387  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 23:48:44.034689  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 23:48:44.034769  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 23:48:44.045767  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 23:48:44.056709  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 23:48:44.056781  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 23:48:44.067638  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 23:48:44.077701  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 23:48:44.077809  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 23:48:44.088461  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 23:48:44.098545  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 23:48:44.098609  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 23:48:44.109113  346092 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 23:48:44.215580  346092 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 23:48:44.215831  346092 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 23:48:44.347666  346092 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 23:48:44.347842  346092 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 23:48:44.348003  346092 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 23:48:44.559912  346092 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 23:48:44.623290  346092 out.go:204]   - Generating certificates and keys ...
	I0803 23:48:44.623399  346092 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 23:48:44.623498  346092 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 23:48:44.677872  346092 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 23:48:45.161412  346092 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 23:48:45.359698  346092 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 23:48:45.509093  346092 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 23:48:45.735838  346092 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 23:48:45.736073  346092 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-349588 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0803 23:48:45.868983  346092 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 23:48:45.869145  346092 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-349588 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0803 23:48:45.939849  346092 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 23:48:46.488546  346092 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 23:48:46.609445  346092 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 23:48:46.609584  346092 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 23:48:46.913292  346092 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 23:48:47.075011  346092 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 23:48:47.280716  346092 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 23:48:47.381369  346092 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 23:48:47.425104  346092 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 23:48:47.425771  346092 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 23:48:47.430790  346092 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 23:48:47.432462  346092 out.go:204]   - Booting up control plane ...
	I0803 23:48:47.432572  346092 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 23:48:47.432670  346092 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 23:48:47.432761  346092 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 23:48:47.451505  346092 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 23:48:47.452518  346092 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 23:48:47.452610  346092 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 23:48:47.581671  346092 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 23:48:47.581778  346092 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 23:48:48.582643  346092 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001457349s
	I0803 23:48:48.582750  346092 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 23:48:54.290536  346092 kubeadm.go:310] [api-check] The API server is healthy after 5.710287165s
	I0803 23:48:54.303491  346092 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 23:48:54.323699  346092 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 23:48:54.354226  346092 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 23:48:54.354458  346092 kubeadm.go:310] [mark-control-plane] Marking the node ha-349588 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 23:48:54.369768  346092 kubeadm.go:310] [bootstrap-token] Using token: vmd729.4ijgfu3uo5k2v1gw
	I0803 23:48:54.371222  346092 out.go:204]   - Configuring RBAC rules ...
	I0803 23:48:54.371383  346092 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 23:48:54.384821  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 23:48:54.400057  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 23:48:54.403731  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 23:48:54.408140  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 23:48:54.416162  346092 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 23:48:54.701480  346092 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 23:48:55.157420  346092 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 23:48:55.698866  346092 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 23:48:55.698893  346092 kubeadm.go:310] 
	I0803 23:48:55.699005  346092 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 23:48:55.699032  346092 kubeadm.go:310] 
	I0803 23:48:55.699132  346092 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 23:48:55.699142  346092 kubeadm.go:310] 
	I0803 23:48:55.699181  346092 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 23:48:55.699255  346092 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 23:48:55.699320  346092 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 23:48:55.699330  346092 kubeadm.go:310] 
	I0803 23:48:55.699410  346092 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 23:48:55.699418  346092 kubeadm.go:310] 
	I0803 23:48:55.699495  346092 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 23:48:55.699514  346092 kubeadm.go:310] 
	I0803 23:48:55.699557  346092 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 23:48:55.699622  346092 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 23:48:55.699684  346092 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 23:48:55.699690  346092 kubeadm.go:310] 
	I0803 23:48:55.699764  346092 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 23:48:55.699833  346092 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 23:48:55.699839  346092 kubeadm.go:310] 
	I0803 23:48:55.699954  346092 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vmd729.4ijgfu3uo5k2v1gw \
	I0803 23:48:55.700069  346092 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c \
	I0803 23:48:55.700090  346092 kubeadm.go:310] 	--control-plane 
	I0803 23:48:55.700094  346092 kubeadm.go:310] 
	I0803 23:48:55.700168  346092 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 23:48:55.700181  346092 kubeadm.go:310] 
	I0803 23:48:55.700261  346092 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vmd729.4ijgfu3uo5k2v1gw \
	I0803 23:48:55.700368  346092 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c 
	I0803 23:48:55.701070  346092 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
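	The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control-plane node to sanity-check a join command; a sketch assuming the CA lives in the certificateDir used above (/var/lib/minikube/certs):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'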
	I0803 23:48:55.701096  346092 cni.go:84] Creating CNI manager for ""
	I0803 23:48:55.701103  346092 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:48:55.702841  346092 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0803 23:48:55.704052  346092 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0803 23:48:55.709875  346092 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0803 23:48:55.709897  346092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0803 23:48:55.735301  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0803 23:48:56.119562  346092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 23:48:56.119694  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-349588 minikube.k8s.io/updated_at=2024_08_03T23_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=ha-349588 minikube.k8s.io/primary=true
	I0803 23:48:56.119699  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:56.147094  346092 ops.go:34] apiserver oom_adj: -16
	I0803 23:48:56.339836  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:56.840689  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:57.339892  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:57.840550  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:58.340862  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:58.840451  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:59.339991  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:59.840655  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:00.340322  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:00.840255  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:01.340913  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:01.840915  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:02.340929  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:02.840491  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:03.339993  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:03.840488  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:04.340852  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:04.840324  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:05.339902  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:05.840734  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:06.339995  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:06.840443  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:07.340744  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:07.840499  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:07.945567  346092 kubeadm.go:1113] duration metric: took 11.825936372s to wait for elevateKubeSystemPrivileges
	I0803 23:49:07.945620  346092 kubeadm.go:394] duration metric: took 24.029418072s to StartCluster
	I0803 23:49:07.945649  346092 settings.go:142] acquiring lock: {Name:mk918fd72253bf33e8bae308fd36ed8b1c353763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:07.945731  346092 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:49:07.946576  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/kubeconfig: {Name:mkd789cdd11c6330d283dbc76129ed198eb15398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:07.946821  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 23:49:07.946833  346092 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:49:07.946857  346092 start.go:241] waiting for startup goroutines ...
	I0803 23:49:07.946870  346092 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 23:49:07.946934  346092 addons.go:69] Setting storage-provisioner=true in profile "ha-349588"
	I0803 23:49:07.946952  346092 addons.go:69] Setting default-storageclass=true in profile "ha-349588"
	I0803 23:49:07.946967  346092 addons.go:234] Setting addon storage-provisioner=true in "ha-349588"
	I0803 23:49:07.946981  346092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-349588"
	I0803 23:49:07.946996  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:07.947103  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:07.947486  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.947486  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.947519  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.947532  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.963361  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36977
	I0803 23:49:07.963699  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0803 23:49:07.963950  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.964258  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.964486  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.964506  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.964822  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.964843  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.964892  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.965113  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:07.965322  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.965886  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.965916  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.967448  346092 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:49:07.967821  346092 kapi.go:59] client config for ha-349588: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key", CAFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 23:49:07.968419  346092 cert_rotation.go:137] Starting client certificate rotation controller
	I0803 23:49:07.968706  346092 addons.go:234] Setting addon default-storageclass=true in "ha-349588"
	I0803 23:49:07.968758  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:07.969038  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.969082  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.981268  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I0803 23:49:07.981760  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.982406  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.982449  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.982806  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.983026  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:07.984863  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:07.984975  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I0803 23:49:07.985529  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.986001  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.986021  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.986315  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.986782  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.986817  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.986999  346092 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:49:07.988373  346092 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:49:07.988390  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:49:07.988406  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:07.991634  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:07.992145  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:07.992175  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:07.992401  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:07.992624  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:07.992802  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:07.992977  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:08.004028  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0803 23:49:08.004521  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:08.004986  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:08.005008  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:08.005356  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:08.005562  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:08.007175  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:08.007434  346092 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:49:08.007453  346092 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:49:08.007479  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:08.010032  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:08.010474  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:08.010504  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:08.010667  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:08.010853  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:08.011029  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:08.011165  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:08.072504  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 23:49:08.190173  346092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:49:08.206987  346092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:49:08.632401  346092 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
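	The CoreDNS rewrite above injects a hosts{} block that resolves host.minikube.internal to 192.168.39.1. It can be confirmed after the fact; a sketch assuming kubectl is pointed at the ha-349588 kubeconfig written earlier:
	  kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'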
	I0803 23:49:09.027910  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.027947  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.027987  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.028010  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.028254  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028269  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028278  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.028285  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.028315  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028333  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028343  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.028350  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.028534  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028546  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028690  346092 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0803 23:49:09.028698  346092 round_trippers.go:469] Request Headers:
	I0803 23:49:09.028710  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:49:09.028714  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:49:09.028850  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028876  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028916  346092 main.go:141] libmachine: (ha-349588) DBG | Closing plugin on server side
	I0803 23:49:09.058640  346092 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0803 23:49:09.059232  346092 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0803 23:49:09.059248  346092 round_trippers.go:469] Request Headers:
	I0803 23:49:09.059257  346092 round_trippers.go:473]     Content-Type: application/json
	I0803 23:49:09.059262  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:49:09.059267  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:49:09.065033  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:49:09.065333  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.065358  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.065669  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.065690  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.067539  346092 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0803 23:49:09.068807  346092 addons.go:510] duration metric: took 1.121931039s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0803 23:49:09.068867  346092 start.go:246] waiting for cluster config update ...
	I0803 23:49:09.068886  346092 start.go:255] writing updated cluster config ...
	I0803 23:49:09.070612  346092 out.go:177] 
	I0803 23:49:09.072304  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:09.072402  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:49:09.074300  346092 out.go:177] * Starting "ha-349588-m02" control-plane node in "ha-349588" cluster
	I0803 23:49:09.075715  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:49:09.075749  346092 cache.go:56] Caching tarball of preloaded images
	I0803 23:49:09.075867  346092 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:49:09.075882  346092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:49:09.075999  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:49:09.076295  346092 start.go:360] acquireMachinesLock for ha-349588-m02: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:49:09.076362  346092 start.go:364] duration metric: took 35.831µs to acquireMachinesLock for "ha-349588-m02"
	I0803 23:49:09.076384  346092 start.go:93] Provisioning new machine with config: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:49:09.076493  346092 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0803 23:49:09.079072  346092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:49:09.079194  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:09.079228  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:09.095204  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0803 23:49:09.095749  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:09.096322  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:09.096359  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:09.096849  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:09.097061  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:09.097232  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:09.097411  346092 start.go:159] libmachine.API.Create for "ha-349588" (driver="kvm2")
	I0803 23:49:09.097439  346092 client.go:168] LocalClient.Create starting
	I0803 23:49:09.097476  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0803 23:49:09.097545  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:49:09.097588  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:49:09.097649  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0803 23:49:09.097670  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:49:09.097681  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:49:09.097699  346092 main.go:141] libmachine: Running pre-create checks...
	I0803 23:49:09.097715  346092 main.go:141] libmachine: (ha-349588-m02) Calling .PreCreateCheck
	I0803 23:49:09.097887  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetConfigRaw
	I0803 23:49:09.098333  346092 main.go:141] libmachine: Creating machine...
	I0803 23:49:09.098349  346092 main.go:141] libmachine: (ha-349588-m02) Calling .Create
	I0803 23:49:09.098480  346092 main.go:141] libmachine: (ha-349588-m02) Creating KVM machine...
	I0803 23:49:09.099793  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found existing default KVM network
	I0803 23:49:09.099966  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found existing private KVM network mk-ha-349588
	I0803 23:49:09.100121  346092 main.go:141] libmachine: (ha-349588-m02) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02 ...
	I0803 23:49:09.100150  346092 main.go:141] libmachine: (ha-349588-m02) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:49:09.100241  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.100120  346511 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:49:09.100390  346092 main.go:141] libmachine: (ha-349588-m02) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:49:09.392884  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.392736  346511 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa...
	I0803 23:49:09.506019  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.505833  346511 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/ha-349588-m02.rawdisk...
	I0803 23:49:09.506078  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Writing magic tar header
	I0803 23:49:09.506097  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Writing SSH key tar header
	I0803 23:49:09.506110  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.505972  346511 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02 ...
	I0803 23:49:09.506140  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02
	I0803 23:49:09.506180  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0803 23:49:09.506197  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02 (perms=drwx------)
	I0803 23:49:09.506214  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:49:09.506255  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:49:09.506271  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0803 23:49:09.506284  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0803 23:49:09.506299  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:49:09.506311  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:49:09.506322  346092 main.go:141] libmachine: (ha-349588-m02) Creating domain...
	I0803 23:49:09.506337  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0803 23:49:09.506348  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:49:09.506358  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:49:09.506369  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home
	I0803 23:49:09.506379  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Skipping /home - not owner
	I0803 23:49:09.507461  346092 main.go:141] libmachine: (ha-349588-m02) define libvirt domain using xml: 
	I0803 23:49:09.507488  346092 main.go:141] libmachine: (ha-349588-m02) <domain type='kvm'>
	I0803 23:49:09.507498  346092 main.go:141] libmachine: (ha-349588-m02)   <name>ha-349588-m02</name>
	I0803 23:49:09.507513  346092 main.go:141] libmachine: (ha-349588-m02)   <memory unit='MiB'>2200</memory>
	I0803 23:49:09.507523  346092 main.go:141] libmachine: (ha-349588-m02)   <vcpu>2</vcpu>
	I0803 23:49:09.507530  346092 main.go:141] libmachine: (ha-349588-m02)   <features>
	I0803 23:49:09.507539  346092 main.go:141] libmachine: (ha-349588-m02)     <acpi/>
	I0803 23:49:09.507547  346092 main.go:141] libmachine: (ha-349588-m02)     <apic/>
	I0803 23:49:09.507555  346092 main.go:141] libmachine: (ha-349588-m02)     <pae/>
	I0803 23:49:09.507564  346092 main.go:141] libmachine: (ha-349588-m02)     
	I0803 23:49:09.507572  346092 main.go:141] libmachine: (ha-349588-m02)   </features>
	I0803 23:49:09.507586  346092 main.go:141] libmachine: (ha-349588-m02)   <cpu mode='host-passthrough'>
	I0803 23:49:09.507596  346092 main.go:141] libmachine: (ha-349588-m02)   
	I0803 23:49:09.507604  346092 main.go:141] libmachine: (ha-349588-m02)   </cpu>
	I0803 23:49:09.507615  346092 main.go:141] libmachine: (ha-349588-m02)   <os>
	I0803 23:49:09.507625  346092 main.go:141] libmachine: (ha-349588-m02)     <type>hvm</type>
	I0803 23:49:09.507633  346092 main.go:141] libmachine: (ha-349588-m02)     <boot dev='cdrom'/>
	I0803 23:49:09.507643  346092 main.go:141] libmachine: (ha-349588-m02)     <boot dev='hd'/>
	I0803 23:49:09.507655  346092 main.go:141] libmachine: (ha-349588-m02)     <bootmenu enable='no'/>
	I0803 23:49:09.507688  346092 main.go:141] libmachine: (ha-349588-m02)   </os>
	I0803 23:49:09.507701  346092 main.go:141] libmachine: (ha-349588-m02)   <devices>
	I0803 23:49:09.507709  346092 main.go:141] libmachine: (ha-349588-m02)     <disk type='file' device='cdrom'>
	I0803 23:49:09.507752  346092 main.go:141] libmachine: (ha-349588-m02)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/boot2docker.iso'/>
	I0803 23:49:09.507779  346092 main.go:141] libmachine: (ha-349588-m02)       <target dev='hdc' bus='scsi'/>
	I0803 23:49:09.507793  346092 main.go:141] libmachine: (ha-349588-m02)       <readonly/>
	I0803 23:49:09.507804  346092 main.go:141] libmachine: (ha-349588-m02)     </disk>
	I0803 23:49:09.507816  346092 main.go:141] libmachine: (ha-349588-m02)     <disk type='file' device='disk'>
	I0803 23:49:09.507829  346092 main.go:141] libmachine: (ha-349588-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:49:09.507845  346092 main.go:141] libmachine: (ha-349588-m02)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/ha-349588-m02.rawdisk'/>
	I0803 23:49:09.507856  346092 main.go:141] libmachine: (ha-349588-m02)       <target dev='hda' bus='virtio'/>
	I0803 23:49:09.507867  346092 main.go:141] libmachine: (ha-349588-m02)     </disk>
	I0803 23:49:09.507877  346092 main.go:141] libmachine: (ha-349588-m02)     <interface type='network'>
	I0803 23:49:09.507891  346092 main.go:141] libmachine: (ha-349588-m02)       <source network='mk-ha-349588'/>
	I0803 23:49:09.507916  346092 main.go:141] libmachine: (ha-349588-m02)       <model type='virtio'/>
	I0803 23:49:09.507946  346092 main.go:141] libmachine: (ha-349588-m02)     </interface>
	I0803 23:49:09.507967  346092 main.go:141] libmachine: (ha-349588-m02)     <interface type='network'>
	I0803 23:49:09.507982  346092 main.go:141] libmachine: (ha-349588-m02)       <source network='default'/>
	I0803 23:49:09.507992  346092 main.go:141] libmachine: (ha-349588-m02)       <model type='virtio'/>
	I0803 23:49:09.508003  346092 main.go:141] libmachine: (ha-349588-m02)     </interface>
	I0803 23:49:09.508013  346092 main.go:141] libmachine: (ha-349588-m02)     <serial type='pty'>
	I0803 23:49:09.508026  346092 main.go:141] libmachine: (ha-349588-m02)       <target port='0'/>
	I0803 23:49:09.508041  346092 main.go:141] libmachine: (ha-349588-m02)     </serial>
	I0803 23:49:09.508068  346092 main.go:141] libmachine: (ha-349588-m02)     <console type='pty'>
	I0803 23:49:09.508087  346092 main.go:141] libmachine: (ha-349588-m02)       <target type='serial' port='0'/>
	I0803 23:49:09.508100  346092 main.go:141] libmachine: (ha-349588-m02)     </console>
	I0803 23:49:09.508108  346092 main.go:141] libmachine: (ha-349588-m02)     <rng model='virtio'>
	I0803 23:49:09.508122  346092 main.go:141] libmachine: (ha-349588-m02)       <backend model='random'>/dev/random</backend>
	I0803 23:49:09.508132  346092 main.go:141] libmachine: (ha-349588-m02)     </rng>
	I0803 23:49:09.508140  346092 main.go:141] libmachine: (ha-349588-m02)     
	I0803 23:49:09.508149  346092 main.go:141] libmachine: (ha-349588-m02)     
	I0803 23:49:09.508162  346092 main.go:141] libmachine: (ha-349588-m02)   </devices>
	I0803 23:49:09.508174  346092 main.go:141] libmachine: (ha-349588-m02) </domain>
	I0803 23:49:09.508186  346092 main.go:141] libmachine: (ha-349588-m02) 
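[Editor's note] The domain definition logged above is rendered from a template and then handed to libvirt. As a rough illustration only (not minikube's actual code; the DomainParams struct and paths are hypothetical), a minimal Go sketch that renders a comparable domain XML with text/template:

package main

import (
	"os"
	"text/template"
)

// DomainParams is a hypothetical parameter struct for the template below.
type DomainParams struct {
	Name     string
	MemoryMB int
	VCPU     int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	p := DomainParams{
		Name:     "ha-349588-m02",
		MemoryMB: 2200,
		VCPU:     2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/ha-349588-m02.rawdisk",
		Network:  "mk-ha-349588",
	}
	// Render the XML to stdout; the real driver passes the resulting string to libvirt.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}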
	I0803 23:49:09.515269  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:7a:63:bd in network default
	I0803 23:49:09.515943  346092 main.go:141] libmachine: (ha-349588-m02) Ensuring networks are active...
	I0803 23:49:09.515967  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:09.516868  346092 main.go:141] libmachine: (ha-349588-m02) Ensuring network default is active
	I0803 23:49:09.517308  346092 main.go:141] libmachine: (ha-349588-m02) Ensuring network mk-ha-349588 is active
	I0803 23:49:09.517724  346092 main.go:141] libmachine: (ha-349588-m02) Getting domain xml...
	I0803 23:49:09.518644  346092 main.go:141] libmachine: (ha-349588-m02) Creating domain...
	I0803 23:49:10.755347  346092 main.go:141] libmachine: (ha-349588-m02) Waiting to get IP...
	I0803 23:49:10.756208  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:10.756617  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:10.756647  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:10.756592  346511 retry.go:31] will retry after 196.457708ms: waiting for machine to come up
	I0803 23:49:10.955123  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:10.955579  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:10.955605  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:10.955538  346511 retry.go:31] will retry after 314.513004ms: waiting for machine to come up
	I0803 23:49:11.272300  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:11.272803  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:11.272840  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:11.272737  346511 retry.go:31] will retry after 311.291518ms: waiting for machine to come up
	I0803 23:49:11.585254  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:11.585799  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:11.585830  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:11.585723  346511 retry.go:31] will retry after 523.229806ms: waiting for machine to come up
	I0803 23:49:12.110649  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:12.111090  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:12.111117  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:12.111031  346511 retry.go:31] will retry after 594.349932ms: waiting for machine to come up
	I0803 23:49:12.706604  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:12.707015  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:12.707046  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:12.706950  346511 retry.go:31] will retry after 579.421708ms: waiting for machine to come up
	I0803 23:49:13.287722  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:13.288146  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:13.288173  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:13.288098  346511 retry.go:31] will retry after 832.78526ms: waiting for machine to come up
	I0803 23:49:14.122636  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:14.123072  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:14.123097  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:14.123032  346511 retry.go:31] will retry after 1.40942689s: waiting for machine to come up
	I0803 23:49:15.534952  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:15.535443  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:15.535483  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:15.535391  346511 retry.go:31] will retry after 1.773682348s: waiting for machine to come up
	I0803 23:49:17.310303  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:17.310693  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:17.310720  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:17.310649  346511 retry.go:31] will retry after 2.230324158s: waiting for machine to come up
	I0803 23:49:19.542820  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:19.543326  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:19.543357  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:19.543274  346511 retry.go:31] will retry after 2.161656606s: waiting for machine to come up
	I0803 23:49:21.706940  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:21.707447  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:21.707472  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:21.707412  346511 retry.go:31] will retry after 2.578584432s: waiting for machine to come up
	I0803 23:49:24.287397  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:24.287819  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:24.287849  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:24.287767  346511 retry.go:31] will retry after 3.341759682s: waiting for machine to come up
	I0803 23:49:27.633275  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:27.633768  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:27.634003  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:27.633715  346511 retry.go:31] will retry after 4.956950166s: waiting for machine to come up
	I0803 23:49:32.592015  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.592451  346092 main.go:141] libmachine: (ha-349588-m02) Found IP for machine: 192.168.39.67
	I0803 23:49:32.592471  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has current primary IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.592477  346092 main.go:141] libmachine: (ha-349588-m02) Reserving static IP address...
	I0803 23:49:32.592850  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find host DHCP lease matching {name: "ha-349588-m02", mac: "52:54:00:c5:a2:30", ip: "192.168.39.67"} in network mk-ha-349588
	I0803 23:49:32.671097  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Getting to WaitForSSH function...
	I0803 23:49:32.671127  346092 main.go:141] libmachine: (ha-349588-m02) Reserved static IP address: 192.168.39.67
	I0803 23:49:32.671140  346092 main.go:141] libmachine: (ha-349588-m02) Waiting for SSH to be available...
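[Editor's note] The "will retry after ..." lines above poll for a DHCP lease with a roughly doubling, jittered delay until the domain reports an IP. A simplified, self-contained Go sketch of that pattern (not the actual retry.go implementation; the 200ms base and attempt count are assumptions):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or attempts run out,
// sleeping with jittered, roughly doubling backoff between tries.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	backoff := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Add up to 50% jitter so concurrent waiters don't poll in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", errors.New("machine did not get an IP in time")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.67", nil
	}, 10)
	fmt.Println(ip, err)
}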
	I0803 23:49:32.674109  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.674564  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:32.674591  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.674755  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Using SSH client type: external
	I0803 23:49:32.674775  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa (-rw-------)
	I0803 23:49:32.674832  346092 main.go:141] libmachine: (ha-349588-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:49:32.674867  346092 main.go:141] libmachine: (ha-349588-m02) DBG | About to run SSH command:
	I0803 23:49:32.674884  346092 main.go:141] libmachine: (ha-349588-m02) DBG | exit 0
	I0803 23:49:32.797788  346092 main.go:141] libmachine: (ha-349588-m02) DBG | SSH cmd err, output: <nil>: 
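[Editor's note] The external SSH probe logged just above simply runs "exit 0" over the system ssh binary with the options shown in the DBG line, and succeeds once sshd in the guest accepts the key. A minimal Go sketch of that probe (paths and addresses are illustrative; option list abbreviated):

package main

import (
	"fmt"
	"os/exec"
)

// sshExit0 shells out to /usr/bin/ssh with the same kind of options the
// driver logs and runs "exit 0" on the remote host.
func sshExit0(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	if err := sshExit0("docker", "192.168.39.67", "/path/to/id_rsa"); err != nil {
		fmt.Println("SSH not ready yet:", err)
	}
}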
	I0803 23:49:32.798082  346092 main.go:141] libmachine: (ha-349588-m02) KVM machine creation complete!
	I0803 23:49:32.798388  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetConfigRaw
	I0803 23:49:32.798971  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:32.799188  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:32.799417  346092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:49:32.799435  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:49:32.800912  346092 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:49:32.800962  346092 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:49:32.800974  346092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:49:32.800983  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:32.803140  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.803577  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:32.803607  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.803746  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:32.803953  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.804142  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.804329  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:32.804496  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:32.804723  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:32.804734  346092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:49:32.905099  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:49:32.905127  346092 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:49:32.905135  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:32.908228  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.908609  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:32.908632  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.908801  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:32.909041  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.909218  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.909340  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:32.909571  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:32.909828  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:32.909842  346092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:49:33.015154  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:49:33.015233  346092 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:49:33.015243  346092 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:49:33.015251  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:33.015518  346092 buildroot.go:166] provisioning hostname "ha-349588-m02"
	I0803 23:49:33.015547  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:33.015814  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.018501  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.018958  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.018992  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.019139  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.019322  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.019503  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.019644  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.019793  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:33.019965  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:33.019979  346092 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588-m02 && echo "ha-349588-m02" | sudo tee /etc/hostname
	I0803 23:49:33.137434  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588-m02
	
	I0803 23:49:33.137463  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.140293  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.140675  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.140702  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.140906  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.141108  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.141288  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.141456  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.141647  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:33.141861  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:33.141889  346092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:49:33.255392  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:49:33.255436  346092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:49:33.255459  346092 buildroot.go:174] setting up certificates
	I0803 23:49:33.255472  346092 provision.go:84] configureAuth start
	I0803 23:49:33.255487  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:33.255787  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:33.258331  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.258649  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.258678  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.258872  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.260786  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.261093  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.261133  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.261245  346092 provision.go:143] copyHostCerts
	I0803 23:49:33.261291  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:49:33.261335  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:49:33.261348  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:49:33.261441  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:49:33.261586  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:49:33.261610  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:49:33.261617  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:49:33.261649  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:49:33.261693  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:49:33.261709  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:49:33.261715  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:49:33.261736  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:49:33.261796  346092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588-m02 san=[127.0.0.1 192.168.39.67 ha-349588-m02 localhost minikube]
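[Editor's note] The server cert generated above carries SANs for the node IP, the hostname and the usual localhost aliases. A compressed Go sketch of issuing such a certificate with crypto/x509; minikube signs with its own CA, whereas this sketch self-signs purely to keep the example short:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-349588-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go line above.
		DNSNames:    []string{"ha-349588-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}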
	I0803 23:49:33.513319  346092 provision.go:177] copyRemoteCerts
	I0803 23:49:33.513401  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:49:33.513438  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.516462  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.516819  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.516856  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.517011  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.517238  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.517393  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.517618  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:33.600517  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:49:33.600605  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:49:33.630035  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:49:33.630119  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:49:33.659132  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:49:33.659199  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:49:33.693181  346092 provision.go:87] duration metric: took 437.692464ms to configureAuth
	I0803 23:49:33.693210  346092 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:49:33.693426  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:33.693563  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.696446  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.696934  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.696969  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.697212  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.697497  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.697727  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.698031  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.698216  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:33.698403  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:33.698424  346092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:49:33.959540  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:49:33.959581  346092 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:49:33.959593  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetURL
	I0803 23:49:33.960929  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Using libvirt version 6000000
	I0803 23:49:33.963512  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.963899  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.963929  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.964106  346092 main.go:141] libmachine: Docker is up and running!
	I0803 23:49:33.964121  346092 main.go:141] libmachine: Reticulating splines...
	I0803 23:49:33.964130  346092 client.go:171] duration metric: took 24.866682664s to LocalClient.Create
	I0803 23:49:33.964160  346092 start.go:167] duration metric: took 24.866749901s to libmachine.API.Create "ha-349588"
	I0803 23:49:33.964172  346092 start.go:293] postStartSetup for "ha-349588-m02" (driver="kvm2")
	I0803 23:49:33.964187  346092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:49:33.964221  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:33.964513  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:49:33.964545  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.966907  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.967233  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.967264  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.967403  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.967604  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.967771  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.967915  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:34.048442  346092 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:49:34.052707  346092 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:49:34.052737  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:49:34.052818  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:49:34.052927  346092 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:49:34.052941  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:49:34.053050  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:49:34.063171  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:49:34.087600  346092 start.go:296] duration metric: took 123.413254ms for postStartSetup
	I0803 23:49:34.087662  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetConfigRaw
	I0803 23:49:34.088269  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:34.091450  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.091855  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.091886  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.092198  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:49:34.092466  346092 start.go:128] duration metric: took 25.015961226s to createHost
	I0803 23:49:34.092491  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:34.094844  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.095181  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.095213  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.095338  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:34.095493  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.095609  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.095704  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:34.095817  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:34.096032  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:34.096044  346092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:49:34.198875  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722728974.174689570
	
	I0803 23:49:34.198914  346092 fix.go:216] guest clock: 1722728974.174689570
	I0803 23:49:34.198924  346092 fix.go:229] Guest: 2024-08-03 23:49:34.17468957 +0000 UTC Remote: 2024-08-03 23:49:34.092479911 +0000 UTC m=+84.709898585 (delta=82.209659ms)
	I0803 23:49:34.198942  346092 fix.go:200] guest clock delta is within tolerance: 82.209659ms
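[Editor's note] The tolerance check above compares the guest's reported clock with the host-side timestamp and accepts the drift if it is small. A tiny Go sketch of that comparison using the values from the log; the 2s tolerance here is an assumption for illustration, not necessarily what minikube uses:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host drift and whether it is
// within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest clock 1722728974.174689570 vs the host timestamp recorded ~82ms earlier.
	guest := time.Unix(1722728974, 174689570)
	host := guest.Add(-82209659 * time.Nanosecond)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}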
	I0803 23:49:34.198947  346092 start.go:83] releasing machines lock for "ha-349588-m02", held for 25.122576839s
	I0803 23:49:34.198968  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.199261  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:34.202135  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.202517  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.202542  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.204728  346092 out.go:177] * Found network options:
	I0803 23:49:34.206084  346092 out.go:177]   - NO_PROXY=192.168.39.168
	W0803 23:49:34.207413  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:49:34.207457  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.208191  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.208417  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.208530  346092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:49:34.208577  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	W0803 23:49:34.208720  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:49:34.208822  346092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:49:34.208847  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:34.211895  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.211921  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.212318  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.212350  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.212374  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.212388  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.212549  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:34.212667  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:34.212745  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.212837  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.212868  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:34.212971  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:34.213014  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:34.213205  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:34.447882  346092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:49:34.454498  346092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:49:34.454573  346092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:49:34.471269  346092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:49:34.471295  346092 start.go:495] detecting cgroup driver to use...
	I0803 23:49:34.471359  346092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:49:34.488153  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:49:34.503703  346092 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:49:34.503780  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:49:34.518917  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:49:34.534021  346092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:49:34.650977  346092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:49:34.798998  346092 docker.go:233] disabling docker service ...
	I0803 23:49:34.799082  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:49:34.814209  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:49:34.828385  346092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:49:34.970942  346092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:49:35.098562  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:49:35.113143  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:49:35.132472  346092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:49:35.132547  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.143835  346092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:49:35.143943  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.155440  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.167348  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.178602  346092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:49:35.190200  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.201259  346092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.220070  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
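[Editor's note] The sed invocations above rewrite whole "key = value" lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, default sysctls). The same edit expressed in Go on an in-memory copy of the file, just to make the pattern explicit; this is illustrative only, not minikube's code path:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Replace the whole line, whatever its current value, as the sed commands do.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}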
	I0803 23:49:35.231425  346092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:49:35.241477  346092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:49:35.241554  346092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:49:35.255830  346092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:49:35.265769  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:49:35.386227  346092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:49:35.520835  346092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:49:35.520906  346092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:49:35.526284  346092 start.go:563] Will wait 60s for crictl version
	I0803 23:49:35.526385  346092 ssh_runner.go:195] Run: which crictl
	I0803 23:49:35.530619  346092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:49:35.572868  346092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:49:35.572976  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:49:35.602425  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:49:35.633811  346092 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:49:35.634957  346092 out.go:177]   - env NO_PROXY=192.168.39.168
	I0803 23:49:35.636018  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:35.638807  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:35.639117  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:35.639150  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:35.639321  346092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:49:35.643666  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:49:35.656809  346092 mustload.go:65] Loading cluster: ha-349588
	I0803 23:49:35.657051  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:35.657322  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:35.657356  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:35.673016  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0803 23:49:35.673600  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:35.674137  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:35.674163  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:35.674524  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:35.674729  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:35.676374  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:35.676762  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:35.676801  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:35.692471  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0803 23:49:35.692902  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:35.693448  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:35.693472  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:35.693833  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:35.694035  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:35.694192  346092 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.67
	I0803 23:49:35.694204  346092 certs.go:194] generating shared ca certs ...
	I0803 23:49:35.694223  346092 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:35.694426  346092 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:49:35.694494  346092 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:49:35.694507  346092 certs.go:256] generating profile certs ...
	I0803 23:49:35.694605  346092 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:49:35.694640  346092 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71
	I0803 23:49:35.694659  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.67 192.168.39.254]
	I0803 23:49:35.917497  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71 ...
	I0803 23:49:35.917544  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71: {Name:mke951f82b9c8987c94f55cf17d3747067a5c196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:35.917758  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71 ...
	I0803 23:49:35.917774  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71: {Name:mk02505c8ddb9ca87fb327815bc5ef9322277b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:35.917868  346092 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:49:35.918007  346092 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
	I0803 23:49:35.918138  346092 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:49:35.918156  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:49:35.918172  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:49:35.918186  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:49:35.918202  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:49:35.918215  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:49:35.918227  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:49:35.918239  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:49:35.918251  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:49:35.918299  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:49:35.918329  346092 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:49:35.918338  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:49:35.918360  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:49:35.918384  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:49:35.918404  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:49:35.918441  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:49:35.918466  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:49:35.918480  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:49:35.918490  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:35.918524  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:35.921612  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:35.922060  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:35.922086  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:35.922268  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:35.922528  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:35.922696  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:35.922862  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:35.993968  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0803 23:49:35.999189  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:49:36.012984  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0803 23:49:36.017711  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0803 23:49:36.030643  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:49:36.035338  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:49:36.046723  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:49:36.051542  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:49:36.063687  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:49:36.068111  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:49:36.079721  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0803 23:49:36.084574  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:49:36.097214  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:49:36.123871  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:49:36.150504  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:49:36.176044  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:49:36.201207  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0803 23:49:36.226385  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:49:36.251236  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:49:36.276179  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:49:36.300943  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:49:36.326134  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:49:36.351169  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:49:36.376713  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:49:36.394851  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0803 23:49:36.412839  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:49:36.429967  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:49:36.447330  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:49:36.465224  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:49:36.484948  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:49:36.504399  346092 ssh_runner.go:195] Run: openssl version
	I0803 23:49:36.510609  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:49:36.522136  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:36.527204  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:36.527286  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:36.533370  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:49:36.545165  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:49:36.557561  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:49:36.562446  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:49:36.562514  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:49:36.568768  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:49:36.580483  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:49:36.592809  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:49:36.598103  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:49:36.598176  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:49:36.604409  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
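
The openssl/ln sequence above is how minikube publishes extra CAs into the guest's trust store: each certificate is copied to /usr/share/ca-certificates, linked into /etc/ssl/certs, and then linked again under its OpenSSL subject hash so TLS libraries can resolve it. A condensed sketch of the same steps, using the minikubeCA.pem path from this run (illustrative commands, not part of the test):

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH resolves to b5213941 in this run
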
	I0803 23:49:36.616058  346092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:49:36.620743  346092 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:49:36.620797  346092 kubeadm.go:934] updating node {m02 192.168.39.67 8443 v1.30.3 crio true true} ...
	I0803 23:49:36.620884  346092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:49:36.620910  346092 kube-vip.go:115] generating kube-vip config ...
	I0803 23:49:36.620954  346092 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:49:36.638507  346092 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:49:36.638607  346092 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
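
The generated manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod and the node should claim the control-plane VIP 192.168.39.254 on eth0. A quick way to confirm that from the node itself (illustrative commands, not executed by the test):

	# kube-vip static pod running under CRI-O
	sudo crictl ps --name kube-vip
	# VIP bound on the configured interface
	ip -4 addr show dev eth0 | grep 192.168.39.254
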
	I0803 23:49:36.638675  346092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:49:36.649343  346092 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:49:36.649405  346092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:49:36.661547  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:49:36.661580  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:49:36.661674  346092 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0803 23:49:36.661683  346092 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0803 23:49:36.661689  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:49:36.666174  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:49:36.666213  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:49:37.652314  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:49:37.669824  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:49:37.669973  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:49:37.674869  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:49:37.674913  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0803 23:49:39.850597  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:49:39.850682  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:49:39.855926  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:49:39.855963  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:49:40.108212  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:49:40.119630  346092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0803 23:49:40.137422  346092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:49:40.154245  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:49:40.171440  346092 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:49:40.175867  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:49:40.189078  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:49:40.320235  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:49:40.338074  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:40.338438  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:40.338480  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:40.354264  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0803 23:49:40.354726  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:40.355217  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:40.355240  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:40.355602  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:40.355803  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:40.355978  346092 start.go:317] joinCluster: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:49:40.356124  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:49:40.356168  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:40.359343  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:40.359840  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:40.359873  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:40.360005  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:40.360235  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:40.360419  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:40.360578  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:40.515090  346092 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:49:40.515146  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wojua7.acwoc7lubp2sjzye --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443"
	I0803 23:50:02.946975  346092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wojua7.acwoc7lubp2sjzye --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443": (22.431790037s)
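
The two kubeadm invocations above form the usual flow for adding a control-plane node: the existing control plane mints a join token and prints the join command, and the new node runs that command with --control-plane plus its own advertise address. Stripped to the essentials (token and CA hash are the ephemeral values printed in this run):

	# on the existing control plane
	kubeadm token create --print-join-command --ttl=0
	# on ha-349588-m02, using the printed token and discovery hash
	kubeadm join control-plane.minikube.internal:8443 --token <token> \
	    --discovery-token-ca-cert-hash sha256:<hash> \
	    --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443
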
	I0803 23:50:02.947019  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:50:03.446614  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-349588-m02 minikube.k8s.io/updated_at=2024_08_03T23_50_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=ha-349588 minikube.k8s.io/primary=false
	I0803 23:50:03.609385  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-349588-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0803 23:50:03.735531  346092 start.go:319] duration metric: took 23.379547866s to joinCluster
	I0803 23:50:03.735696  346092 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:50:03.735969  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:03.737123  346092 out.go:177] * Verifying Kubernetes components...
	I0803 23:50:03.738194  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:50:04.021548  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:50:04.087375  346092 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:50:04.087738  346092 kapi.go:59] client config for ha-349588: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key", CAFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:50:04.087825  346092 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.168:8443
	I0803 23:50:04.088155  346092 node_ready.go:35] waiting up to 6m0s for node "ha-349588-m02" to be "Ready" ...
	I0803 23:50:04.088267  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:04.088279  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:04.088289  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:04.088293  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:04.101608  346092 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0803 23:50:04.588793  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:04.588824  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:04.588835  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:04.588842  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:04.598391  346092 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0803 23:50:05.088603  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:05.088654  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:05.088667  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:05.088672  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:05.101983  346092 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0803 23:50:05.588487  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:05.588512  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:05.588520  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:05.588524  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:05.594895  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:50:06.088958  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:06.088993  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:06.089006  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:06.089013  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:06.093121  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:06.093903  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:06.589007  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:06.589034  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:06.589043  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:06.589049  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:06.592751  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:07.088728  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:07.088755  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:07.088765  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:07.088769  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:07.092603  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:07.588467  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:07.588494  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:07.588503  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:07.588507  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:07.592369  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:08.088806  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:08.088832  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:08.088842  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:08.088847  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:08.341203  346092 round_trippers.go:574] Response Status: 200 OK in 252 milliseconds
	I0803 23:50:08.341873  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:08.588741  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:08.588769  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:08.588784  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:08.588791  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:08.593035  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:09.088693  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:09.088718  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:09.088727  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:09.088730  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:09.092569  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:09.589160  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:09.589184  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:09.589193  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:09.589196  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:09.593032  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:10.088752  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:10.088778  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:10.088786  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:10.088790  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:10.093525  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:10.588993  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:10.589018  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:10.589026  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:10.589030  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:10.593009  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:10.593688  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:11.089324  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:11.089353  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:11.089364  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:11.089369  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:11.092666  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:11.588626  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:11.588652  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:11.588661  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:11.588665  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:11.592229  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:12.088688  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:12.088717  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:12.088728  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:12.088735  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:12.092442  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:12.588482  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:12.588508  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:12.588519  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:12.588525  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:12.592800  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:13.089159  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:13.089183  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:13.089192  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:13.089197  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:13.093008  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:13.093622  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:13.588880  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:13.588905  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:13.588914  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:13.588920  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:13.592486  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:14.088375  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:14.088400  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:14.088409  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:14.088413  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:14.092385  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:14.588835  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:14.588868  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:14.588877  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:14.588881  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:14.592420  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:15.089024  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:15.089047  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:15.089057  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:15.089061  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:15.094639  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:50:15.095247  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:15.589208  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:15.589235  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:15.589249  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:15.589255  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:15.592801  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:16.089093  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:16.089119  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:16.089127  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:16.089132  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:16.093269  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:16.588406  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:16.588436  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:16.588445  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:16.588448  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:16.592508  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:17.088496  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:17.088524  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:17.088532  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:17.088537  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:17.092475  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:17.588414  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:17.588441  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:17.588450  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:17.588454  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:17.591618  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:17.592274  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:18.088769  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:18.088799  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:18.088810  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:18.088815  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:18.092770  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:18.588849  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:18.588880  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:18.588891  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:18.588896  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:18.596556  346092 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:50:19.089425  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:19.089451  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:19.089460  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:19.089465  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:19.093449  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:19.588574  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:19.588600  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:19.588608  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:19.588611  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:19.592387  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:19.593033  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:20.089327  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.089353  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.089365  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.089371  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.092714  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.093273  346092 node_ready.go:49] node "ha-349588-m02" has status "Ready":"True"
	I0803 23:50:20.093308  346092 node_ready.go:38] duration metric: took 16.005118223s for node "ha-349588-m02" to be "Ready" ...
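
The raw GET polling above, and the per-pod waits that follow, are the programmatic equivalent of a declarative kubectl wait. Someone reproducing the same readiness check by hand could use something like the following (illustrative, not what the test runs):

	kubectl wait node/ha-349588-m02 --for=condition=Ready --timeout=6m
	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
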
	I0803 23:50:20.093320  346092 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:50:20.093433  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:20.093448  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.093462  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.093469  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.100038  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:50:20.105890  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.105983  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fzmtg
	I0803 23:50:20.105993  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.106000  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.106006  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.109203  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.110030  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.110048  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.110059  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.110065  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.113001  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.113519  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.113536  346092 pod_ready.go:81] duration metric: took 7.615549ms for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.113545  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.113609  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8qt6
	I0803 23:50:20.113616  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.113623  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.113630  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.116416  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.117033  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.117048  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.117055  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.117058  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.119946  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.120560  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.120579  346092 pod_ready.go:81] duration metric: took 7.024999ms for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.120591  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.120656  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588
	I0803 23:50:20.120666  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.120676  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.120683  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.122841  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.123473  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.123488  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.123495  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.123500  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.125721  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.126110  346092 pod_ready.go:92] pod "etcd-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.126125  346092 pod_ready.go:81] duration metric: took 5.526947ms for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.126134  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.126181  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m02
	I0803 23:50:20.126188  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.126194  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.126198  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.128291  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.128736  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.128749  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.128756  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.128759  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.130925  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.131570  346092 pod_ready.go:92] pod "etcd-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.131587  346092 pod_ready.go:81] duration metric: took 5.446889ms for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.131599  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.290056  346092 request.go:629] Waited for 158.368975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:50:20.290126  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:50:20.290132  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.290140  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.290145  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.293571  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.489637  346092 request.go:629] Waited for 195.390522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.489707  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.489712  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.489720  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.489725  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.492984  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.493434  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.493454  346092 pod_ready.go:81] duration metric: took 361.848401ms for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.493467  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.689358  346092 request.go:629] Waited for 195.783846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:50:20.689430  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:50:20.689438  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.689456  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.689465  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.693618  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:20.889724  346092 request.go:629] Waited for 195.310872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.889787  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.889792  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.889800  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.889804  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.893536  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.894217  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.894238  346092 pod_ready.go:81] duration metric: took 400.764562ms for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.894248  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.090421  346092 request.go:629] Waited for 196.080925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:50:21.090509  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:50:21.090516  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.090534  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.090543  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.094083  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.290154  346092 request.go:629] Waited for 195.368168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:21.290234  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:21.290238  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.290246  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.290250  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.293525  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.294206  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:21.294224  346092 pod_ready.go:81] duration metric: took 399.970486ms for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.294234  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.490360  346092 request.go:629] Waited for 196.055949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:50:21.490451  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:50:21.490456  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.490465  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.490468  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.494025  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.690145  346092 request.go:629] Waited for 195.384727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:21.690220  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:21.690228  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.690240  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.690248  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.693529  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.694169  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:21.694192  346092 pod_ready.go:81] duration metric: took 399.951921ms for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.694202  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.890179  346092 request.go:629] Waited for 195.854387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:50:21.890259  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:50:21.890265  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.890274  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.890279  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.893716  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.089910  346092 request.go:629] Waited for 195.371972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.090002  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.090008  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.090016  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.090027  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.093738  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.094479  346092 pod_ready.go:92] pod "kube-proxy-bbzdt" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:22.094500  346092 pod_ready.go:81] duration metric: took 400.291536ms for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.094509  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.289594  346092 request.go:629] Waited for 195.006002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:50:22.289686  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:50:22.289694  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.289702  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.289707  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.293120  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.490210  346092 request.go:629] Waited for 196.420033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:22.490278  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:22.490283  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.490291  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.490294  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.493680  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.494297  346092 pod_ready.go:92] pod "kube-proxy-gbg5q" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:22.494322  346092 pod_ready.go:81] duration metric: took 399.806171ms for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.494332  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.689376  346092 request.go:629] Waited for 194.960002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:50:22.689464  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:50:22.689470  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.689478  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.689482  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.693104  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.890297  346092 request.go:629] Waited for 196.361011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.890391  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.890399  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.890411  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.890420  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.893697  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.894373  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:22.894394  346092 pod_ready.go:81] duration metric: took 400.055147ms for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.894407  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:23.089430  346092 request.go:629] Waited for 194.917023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:50:23.089499  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:50:23.089515  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.089526  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.089531  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.093012  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:23.290011  346092 request.go:629] Waited for 196.376685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:23.290074  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:23.290079  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.290087  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.290094  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.293439  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:23.293983  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:23.294011  346092 pod_ready.go:81] duration metric: took 399.595842ms for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:23.294023  346092 pod_ready.go:38] duration metric: took 3.200674416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
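The trace above is the readiness loop: for each system pod, the tool GETs the pod, then its node, throttled client-side to roughly one request every 200ms, and declares the pod "Ready" once its PodReady condition is True. The following is a minimal client-go sketch of that Ready-condition check only, not minikube's actual pod_ready.go logic; the kubeconfig path, namespace, and pod name are assumptions taken from this run.

```go
// Sketch: poll a pod's Ready condition with client-go until it is True or the
// timeout expires. Paths and names are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-ha-349588", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // crude poll; the real loop also re-checks the node
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```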
	I0803 23:50:23.294047  346092 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:50:23.294103  346092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:50:23.311925  346092 api_server.go:72] duration metric: took 19.57618176s to wait for apiserver process to appear ...
	I0803 23:50:23.311959  346092 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:50:23.311986  346092 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0803 23:50:23.316404  346092 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0803 23:50:23.316479  346092 round_trippers.go:463] GET https://192.168.39.168:8443/version
	I0803 23:50:23.316488  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.316496  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.316500  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.317421  346092 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0803 23:50:23.317614  346092 api_server.go:141] control plane version: v1.30.3
	I0803 23:50:23.317642  346092 api_server.go:131] duration metric: took 5.676569ms to wait for apiserver health ...
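After the process check, the health wait is a plain GET of /healthz (expecting the body "ok") followed by /version to record the control-plane version. A rough sketch of that probe is below; it skips TLS verification for brevity, which is an assumption of the sketch rather than what minikube does (minikube authenticates with the cluster CA).

```go
// Sketch of the apiserver health probe seen above: GET /healthz, expect "ok",
// then GET /version. TLS verification is intentionally skipped here.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func get(client *http.Client, url string) {
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %d %s\n", url, resp.StatusCode, body)
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: no CA handling in this sketch
	}}
	get(client, "https://192.168.39.168:8443/healthz") // expect body "ok"
	get(client, "https://192.168.39.168:8443/version")
}
```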
	I0803 23:50:23.317651  346092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:50:23.489822  346092 request.go:629] Waited for 172.098571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.489889  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.489894  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.489904  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.489909  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.495307  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:50:23.500631  346092 system_pods.go:59] 17 kube-system pods found
	I0803 23:50:23.500677  346092 system_pods.go:61] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:50:23.500684  346092 system_pods.go:61] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:50:23.500688  346092 system_pods.go:61] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:50:23.500691  346092 system_pods.go:61] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:50:23.500698  346092 system_pods.go:61] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:50:23.500701  346092 system_pods.go:61] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:50:23.500704  346092 system_pods.go:61] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:50:23.500708  346092 system_pods.go:61] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:50:23.500713  346092 system_pods.go:61] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:50:23.500718  346092 system_pods.go:61] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:50:23.500722  346092 system_pods.go:61] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:50:23.500727  346092 system_pods.go:61] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:50:23.500731  346092 system_pods.go:61] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:50:23.500736  346092 system_pods.go:61] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:50:23.500744  346092 system_pods.go:61] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:50:23.500748  346092 system_pods.go:61] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:50:23.500760  346092 system_pods.go:61] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:50:23.500774  346092 system_pods.go:74] duration metric: took 183.114377ms to wait for pod list to return data ...
	I0803 23:50:23.500785  346092 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:50:23.690294  346092 request.go:629] Waited for 189.403835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:50:23.690372  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:50:23.690379  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.690387  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.690390  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.693867  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:23.694134  346092 default_sa.go:45] found service account: "default"
	I0803 23:50:23.694155  346092 default_sa.go:55] duration metric: took 193.358105ms for default service account to be created ...
	I0803 23:50:23.694165  346092 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:50:23.889572  346092 request.go:629] Waited for 195.298844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.889643  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.889648  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.889656  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.889667  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.895176  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:50:23.899132  346092 system_pods.go:86] 17 kube-system pods found
	I0803 23:50:23.899162  346092 system_pods.go:89] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:50:23.899168  346092 system_pods.go:89] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:50:23.899173  346092 system_pods.go:89] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:50:23.899177  346092 system_pods.go:89] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:50:23.899180  346092 system_pods.go:89] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:50:23.899184  346092 system_pods.go:89] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:50:23.899188  346092 system_pods.go:89] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:50:23.899191  346092 system_pods.go:89] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:50:23.899196  346092 system_pods.go:89] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:50:23.899199  346092 system_pods.go:89] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:50:23.899203  346092 system_pods.go:89] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:50:23.899206  346092 system_pods.go:89] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:50:23.899210  346092 system_pods.go:89] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:50:23.899214  346092 system_pods.go:89] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:50:23.899218  346092 system_pods.go:89] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:50:23.899221  346092 system_pods.go:89] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:50:23.899224  346092 system_pods.go:89] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:50:23.899232  346092 system_pods.go:126] duration metric: took 205.059563ms to wait for k8s-apps to be running ...
	I0803 23:50:23.899241  346092 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:50:23.899289  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:50:23.918586  346092 system_svc.go:56] duration metric: took 19.330492ms WaitForService to wait for kubelet
	I0803 23:50:23.918619  346092 kubeadm.go:582] duration metric: took 20.182883458s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:50:23.918639  346092 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:50:24.090107  346092 request.go:629] Waited for 171.389393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes
	I0803 23:50:24.090194  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes
	I0803 23:50:24.090202  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:24.090213  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:24.090218  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:24.094436  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:24.095386  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:50:24.095419  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:50:24.095433  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:50:24.095439  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:50:24.095445  346092 node_conditions.go:105] duration metric: took 176.80069ms to run NodePressure ...
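The NodePressure step above simply lists the nodes and reports each node's CPU and ephemeral-storage capacity. A small client-go sketch of reading those same fields from status.Capacity follows; the kubeconfig path is again an assumption.

```go
// Sketch: list nodes and print the cpu and ephemeral-storage capacity values
// that appear in the node_conditions lines above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```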
	I0803 23:50:24.095460  346092 start.go:241] waiting for startup goroutines ...
	I0803 23:50:24.095497  346092 start.go:255] writing updated cluster config ...
	I0803 23:50:24.097766  346092 out.go:177] 
	I0803 23:50:24.099166  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:24.099285  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:50:24.101394  346092 out.go:177] * Starting "ha-349588-m03" control-plane node in "ha-349588" cluster
	I0803 23:50:24.102673  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:50:24.102710  346092 cache.go:56] Caching tarball of preloaded images
	I0803 23:50:24.102810  346092 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:50:24.102821  346092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:50:24.102925  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:50:24.103193  346092 start.go:360] acquireMachinesLock for ha-349588-m03: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:50:24.103240  346092 start.go:364] duration metric: took 27.2µs to acquireMachinesLock for "ha-349588-m03"
	I0803 23:50:24.103261  346092 start.go:93] Provisioning new machine with config: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
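The provisioning config dumped above carries a Nodes list with one entry per cluster member (name, IP, port, Kubernetes version, runtime, control-plane/worker roles). As a rough illustration only, and not minikube's actual config types, a struct mirroring those per-node fields could look like the sketch below.

```go
// Hypothetical mirror of the per-node entries in the profile config above;
// enough to decode a Nodes list from a config.json-style document.
package main

import (
	"encoding/json"
	"fmt"
)

type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

func main() {
	raw := `[{"Name":"","IP":"192.168.39.168","Port":8443,"KubernetesVersion":"v1.30.3","ContainerRuntime":"crio","ControlPlane":true,"Worker":true},
	         {"Name":"m02","IP":"192.168.39.67","Port":8443,"KubernetesVersion":"v1.30.3","ContainerRuntime":"crio","ControlPlane":true,"Worker":true}]`

	var nodes []Node
	if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes {
		role := "worker"
		if n.ControlPlane {
			role = "control-plane"
		}
		fmt.Printf("%-4s %-15s %s\n", n.Name, n.IP, role)
	}
}
```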
	I0803 23:50:24.103356  346092 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0803 23:50:24.104783  346092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:50:24.104893  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:24.104933  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:24.121746  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0803 23:50:24.122292  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:24.122833  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:24.122857  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:24.123219  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:24.123424  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:24.123599  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:24.123792  346092 start.go:159] libmachine.API.Create for "ha-349588" (driver="kvm2")
	I0803 23:50:24.123823  346092 client.go:168] LocalClient.Create starting
	I0803 23:50:24.123860  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0803 23:50:24.123907  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:50:24.123930  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:50:24.124006  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0803 23:50:24.124033  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:50:24.124049  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:50:24.124078  346092 main.go:141] libmachine: Running pre-create checks...
	I0803 23:50:24.124089  346092 main.go:141] libmachine: (ha-349588-m03) Calling .PreCreateCheck
	I0803 23:50:24.124263  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetConfigRaw
	I0803 23:50:24.124674  346092 main.go:141] libmachine: Creating machine...
	I0803 23:50:24.124688  346092 main.go:141] libmachine: (ha-349588-m03) Calling .Create
	I0803 23:50:24.124837  346092 main.go:141] libmachine: (ha-349588-m03) Creating KVM machine...
	I0803 23:50:24.126236  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found existing default KVM network
	I0803 23:50:24.126409  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found existing private KVM network mk-ha-349588
	I0803 23:50:24.126593  346092 main.go:141] libmachine: (ha-349588-m03) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03 ...
	I0803 23:50:24.126610  346092 main.go:141] libmachine: (ha-349588-m03) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:50:24.126756  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.126596  346924 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:50:24.126786  346092 main.go:141] libmachine: (ha-349588-m03) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:50:24.399033  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.398884  346924 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa...
	I0803 23:50:24.516914  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.516772  346924 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/ha-349588-m03.rawdisk...
	I0803 23:50:24.516953  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Writing magic tar header
	I0803 23:50:24.516982  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Writing SSH key tar header
	I0803 23:50:24.516996  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.516895  346924 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03 ...
	I0803 23:50:24.517016  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03
	I0803 23:50:24.517113  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0803 23:50:24.517143  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03 (perms=drwx------)
	I0803 23:50:24.517162  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:50:24.517179  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:50:24.517197  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0803 23:50:24.517211  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0803 23:50:24.517226  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0803 23:50:24.517243  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:50:24.517254  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:50:24.517267  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home
	I0803 23:50:24.517278  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Skipping /home - not owner
	I0803 23:50:24.517290  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:50:24.517307  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:50:24.517318  346092 main.go:141] libmachine: (ha-349588-m03) Creating domain...
	I0803 23:50:24.518387  346092 main.go:141] libmachine: (ha-349588-m03) define libvirt domain using xml: 
	I0803 23:50:24.518416  346092 main.go:141] libmachine: (ha-349588-m03) <domain type='kvm'>
	I0803 23:50:24.518427  346092 main.go:141] libmachine: (ha-349588-m03)   <name>ha-349588-m03</name>
	I0803 23:50:24.518438  346092 main.go:141] libmachine: (ha-349588-m03)   <memory unit='MiB'>2200</memory>
	I0803 23:50:24.518445  346092 main.go:141] libmachine: (ha-349588-m03)   <vcpu>2</vcpu>
	I0803 23:50:24.518453  346092 main.go:141] libmachine: (ha-349588-m03)   <features>
	I0803 23:50:24.518464  346092 main.go:141] libmachine: (ha-349588-m03)     <acpi/>
	I0803 23:50:24.518474  346092 main.go:141] libmachine: (ha-349588-m03)     <apic/>
	I0803 23:50:24.518485  346092 main.go:141] libmachine: (ha-349588-m03)     <pae/>
	I0803 23:50:24.518498  346092 main.go:141] libmachine: (ha-349588-m03)     
	I0803 23:50:24.518509  346092 main.go:141] libmachine: (ha-349588-m03)   </features>
	I0803 23:50:24.518523  346092 main.go:141] libmachine: (ha-349588-m03)   <cpu mode='host-passthrough'>
	I0803 23:50:24.518533  346092 main.go:141] libmachine: (ha-349588-m03)   
	I0803 23:50:24.518543  346092 main.go:141] libmachine: (ha-349588-m03)   </cpu>
	I0803 23:50:24.518553  346092 main.go:141] libmachine: (ha-349588-m03)   <os>
	I0803 23:50:24.518563  346092 main.go:141] libmachine: (ha-349588-m03)     <type>hvm</type>
	I0803 23:50:24.518575  346092 main.go:141] libmachine: (ha-349588-m03)     <boot dev='cdrom'/>
	I0803 23:50:24.518584  346092 main.go:141] libmachine: (ha-349588-m03)     <boot dev='hd'/>
	I0803 23:50:24.518594  346092 main.go:141] libmachine: (ha-349588-m03)     <bootmenu enable='no'/>
	I0803 23:50:24.518607  346092 main.go:141] libmachine: (ha-349588-m03)   </os>
	I0803 23:50:24.518618  346092 main.go:141] libmachine: (ha-349588-m03)   <devices>
	I0803 23:50:24.518629  346092 main.go:141] libmachine: (ha-349588-m03)     <disk type='file' device='cdrom'>
	I0803 23:50:24.518647  346092 main.go:141] libmachine: (ha-349588-m03)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/boot2docker.iso'/>
	I0803 23:50:24.518657  346092 main.go:141] libmachine: (ha-349588-m03)       <target dev='hdc' bus='scsi'/>
	I0803 23:50:24.518670  346092 main.go:141] libmachine: (ha-349588-m03)       <readonly/>
	I0803 23:50:24.518683  346092 main.go:141] libmachine: (ha-349588-m03)     </disk>
	I0803 23:50:24.518726  346092 main.go:141] libmachine: (ha-349588-m03)     <disk type='file' device='disk'>
	I0803 23:50:24.518753  346092 main.go:141] libmachine: (ha-349588-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:50:24.518781  346092 main.go:141] libmachine: (ha-349588-m03)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/ha-349588-m03.rawdisk'/>
	I0803 23:50:24.518793  346092 main.go:141] libmachine: (ha-349588-m03)       <target dev='hda' bus='virtio'/>
	I0803 23:50:24.518802  346092 main.go:141] libmachine: (ha-349588-m03)     </disk>
	I0803 23:50:24.518813  346092 main.go:141] libmachine: (ha-349588-m03)     <interface type='network'>
	I0803 23:50:24.518823  346092 main.go:141] libmachine: (ha-349588-m03)       <source network='mk-ha-349588'/>
	I0803 23:50:24.518836  346092 main.go:141] libmachine: (ha-349588-m03)       <model type='virtio'/>
	I0803 23:50:24.518878  346092 main.go:141] libmachine: (ha-349588-m03)     </interface>
	I0803 23:50:24.518907  346092 main.go:141] libmachine: (ha-349588-m03)     <interface type='network'>
	I0803 23:50:24.518942  346092 main.go:141] libmachine: (ha-349588-m03)       <source network='default'/>
	I0803 23:50:24.518960  346092 main.go:141] libmachine: (ha-349588-m03)       <model type='virtio'/>
	I0803 23:50:24.518970  346092 main.go:141] libmachine: (ha-349588-m03)     </interface>
	I0803 23:50:24.518978  346092 main.go:141] libmachine: (ha-349588-m03)     <serial type='pty'>
	I0803 23:50:24.518990  346092 main.go:141] libmachine: (ha-349588-m03)       <target port='0'/>
	I0803 23:50:24.519000  346092 main.go:141] libmachine: (ha-349588-m03)     </serial>
	I0803 23:50:24.519008  346092 main.go:141] libmachine: (ha-349588-m03)     <console type='pty'>
	I0803 23:50:24.519020  346092 main.go:141] libmachine: (ha-349588-m03)       <target type='serial' port='0'/>
	I0803 23:50:24.519029  346092 main.go:141] libmachine: (ha-349588-m03)     </console>
	I0803 23:50:24.519045  346092 main.go:141] libmachine: (ha-349588-m03)     <rng model='virtio'>
	I0803 23:50:24.519059  346092 main.go:141] libmachine: (ha-349588-m03)       <backend model='random'>/dev/random</backend>
	I0803 23:50:24.519065  346092 main.go:141] libmachine: (ha-349588-m03)     </rng>
	I0803 23:50:24.519077  346092 main.go:141] libmachine: (ha-349588-m03)     
	I0803 23:50:24.519084  346092 main.go:141] libmachine: (ha-349588-m03)     
	I0803 23:50:24.519098  346092 main.go:141] libmachine: (ha-349588-m03)   </devices>
	I0803 23:50:24.519106  346092 main.go:141] libmachine: (ha-349588-m03) </domain>
	I0803 23:50:24.519124  346092 main.go:141] libmachine: (ha-349588-m03) 
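The XML printed above is handed to libvirt to define and boot the new domain. minikube's kvm2 driver does this through the libvirt API, but the same define/start step can be illustrated by shelling out to virsh, as in the sketch below; the XML path and connection URI are assumptions.

```go
// Sketch only: register a domain from generated XML with virsh, boot it, and
// list the DHCP leases on the cluster network that the driver later polls.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ virsh %v\n%s", args, out)
	return err
}

func main() {
	if err := run("define", "/tmp/ha-349588-m03.xml"); err != nil { // assumed path for the XML above
		panic(err)
	}
	if err := run("start", "ha-349588-m03"); err != nil {
		panic(err)
	}
	_ = run("net-dhcp-leases", "mk-ha-349588") // where the machine's IP eventually shows up
}
```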
	I0803 23:50:24.526713  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:ab:a3:ea in network default
	I0803 23:50:24.527228  346092 main.go:141] libmachine: (ha-349588-m03) Ensuring networks are active...
	I0803 23:50:24.527253  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:24.527861  346092 main.go:141] libmachine: (ha-349588-m03) Ensuring network default is active
	I0803 23:50:24.528200  346092 main.go:141] libmachine: (ha-349588-m03) Ensuring network mk-ha-349588 is active
	I0803 23:50:24.528499  346092 main.go:141] libmachine: (ha-349588-m03) Getting domain xml...
	I0803 23:50:24.529299  346092 main.go:141] libmachine: (ha-349588-m03) Creating domain...
	I0803 23:50:25.809639  346092 main.go:141] libmachine: (ha-349588-m03) Waiting to get IP...
	I0803 23:50:25.810693  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:25.811149  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:25.811200  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:25.811127  346924 retry.go:31] will retry after 239.766839ms: waiting for machine to come up
	I0803 23:50:26.052890  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:26.053455  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:26.053526  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:26.053417  346924 retry.go:31] will retry after 350.096869ms: waiting for machine to come up
	I0803 23:50:26.404999  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:26.405425  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:26.405450  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:26.405378  346924 retry.go:31] will retry after 426.316752ms: waiting for machine to come up
	I0803 23:50:26.832924  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:26.833346  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:26.833377  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:26.833286  346924 retry.go:31] will retry after 468.911288ms: waiting for machine to come up
	I0803 23:50:27.303717  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:27.304186  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:27.304209  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:27.304153  346924 retry.go:31] will retry after 588.198491ms: waiting for machine to come up
	I0803 23:50:27.893918  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:27.894345  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:27.894376  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:27.894289  346924 retry.go:31] will retry after 756.527198ms: waiting for machine to come up
	I0803 23:50:28.652222  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:28.652692  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:28.652722  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:28.652635  346924 retry.go:31] will retry after 956.618375ms: waiting for machine to come up
	I0803 23:50:29.610577  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:29.611053  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:29.611081  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:29.611003  346924 retry.go:31] will retry after 894.193355ms: waiting for machine to come up
	I0803 23:50:30.506910  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:30.507443  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:30.507475  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:30.507383  346924 retry.go:31] will retry after 1.475070752s: waiting for machine to come up
	I0803 23:50:31.984363  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:31.984792  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:31.984823  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:31.984738  346924 retry.go:31] will retry after 1.96830202s: waiting for machine to come up
	I0803 23:50:33.954805  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:33.955250  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:33.955283  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:33.955190  346924 retry.go:31] will retry after 2.345601343s: waiting for machine to come up
	I0803 23:50:36.302961  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:36.303447  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:36.303478  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:36.303397  346924 retry.go:31] will retry after 2.267010238s: waiting for machine to come up
	I0803 23:50:38.571635  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:38.572141  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:38.572165  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:38.572088  346924 retry.go:31] will retry after 4.429291681s: waiting for machine to come up
	I0803 23:50:43.003156  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:43.003613  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:43.003638  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:43.003558  346924 retry.go:31] will retry after 3.48372957s: waiting for machine to come up
	I0803 23:50:46.490110  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.490603  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has current primary IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.490633  346092 main.go:141] libmachine: (ha-349588-m03) Found IP for machine: 192.168.39.79
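The "will retry after ..." lines above are a poll-with-growing-delay loop: query the DHCP lease, and if no IP is assigned yet, sleep an increasing (and slightly jittered) interval before trying again. A generic sketch of that pattern, with a stand-in lookup function rather than the driver's real lease query, is below.

```go
// Sketch of the retry-with-backoff pattern used while waiting for the VM's IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, hence the uneven delays in the log
		fmt.Printf("attempt %d: no IP yet, retrying after %s\n", attempt, wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 { // simulate the lease appearing on the 4th poll
			return "", errors.New("no lease yet")
		}
		return "192.168.39.79", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
```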
	I0803 23:50:46.490655  346092 main.go:141] libmachine: (ha-349588-m03) Reserving static IP address...
	I0803 23:50:46.491072  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find host DHCP lease matching {name: "ha-349588-m03", mac: "52:54:00:1d:c9:03", ip: "192.168.39.79"} in network mk-ha-349588
	I0803 23:50:46.573000  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Getting to WaitForSSH function...
	I0803 23:50:46.573036  346092 main.go:141] libmachine: (ha-349588-m03) Reserved static IP address: 192.168.39.79
	I0803 23:50:46.573049  346092 main.go:141] libmachine: (ha-349588-m03) Waiting for SSH to be available...
	I0803 23:50:46.575539  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.575870  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.575901  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.576123  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Using SSH client type: external
	I0803 23:50:46.576160  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa (-rw-------)
	I0803 23:50:46.576188  346092 main.go:141] libmachine: (ha-349588-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:50:46.576201  346092 main.go:141] libmachine: (ha-349588-m03) DBG | About to run SSH command:
	I0803 23:50:46.576213  346092 main.go:141] libmachine: (ha-349588-m03) DBG | exit 0
	I0803 23:50:46.710090  346092 main.go:141] libmachine: (ha-349588-m03) DBG | SSH cmd err, output: <nil>: 
	I0803 23:50:46.710376  346092 main.go:141] libmachine: (ha-349588-m03) KVM machine creation complete!
	I0803 23:50:46.710702  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetConfigRaw
	I0803 23:50:46.711288  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:46.711523  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:46.711699  346092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:50:46.711715  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:50:46.713405  346092 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:50:46.713420  346092 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:50:46.713426  346092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:50:46.713432  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:46.715823  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.716240  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.716262  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.716392  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:46.716587  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.716764  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.716943  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:46.717168  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:46.717414  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:46.717426  346092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:50:46.833008  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
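Both SSH checks above (the external ssh invocation and the native client) boil down to dialing the machine with the generated key and running "exit 0" until it succeeds. A sketch of that probe with golang.org/x/crypto/ssh follows; the address, user, and key path are copied from this run's log, and error handling is trimmed.

```go
// Sketch of the WaitForSSH step: keep dialing and running "exit 0" until it works.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshExitZero(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	for {
		err := sshExitZero("192.168.39.79:22", "docker",
			"/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa")
		if err == nil {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready:", err)
		time.Sleep(time.Second)
	}
}
```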
	I0803 23:50:46.833043  346092 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:50:46.833055  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:46.836050  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.836542  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.836581  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.836685  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:46.836896  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.837102  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.837277  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:46.837427  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:46.837659  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:46.837674  346092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:50:46.954626  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:50:46.954732  346092 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:50:46.954746  346092 main.go:141] libmachine: Provisioning with buildroot...
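Provisioner detection is driven by the /etc/os-release output shown a few lines up: the guest image reports ID=buildroot, so the buildroot provisioner is selected. A deliberately simple sketch of parsing that file and reading the ID field is below.

```go
// Sketch: parse /etc/os-release (key=value lines, values may be quoted) and
// report the distro ID used for provisioner selection.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		info[parts[0]] = strings.Trim(parts[1], `"`)
	}
	fmt.Printf("ID=%s VERSION_ID=%s\n", info["ID"], info["VERSION_ID"]) // e.g. ID=buildroot VERSION_ID=2023.02.9
}
```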
	I0803 23:50:46.954761  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:46.955024  346092 buildroot.go:166] provisioning hostname "ha-349588-m03"
	I0803 23:50:46.955054  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:46.955260  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:46.958280  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.958653  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.958677  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.958827  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:46.959018  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.959199  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.959356  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:46.959528  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:46.959713  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:46.959727  346092 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588-m03 && echo "ha-349588-m03" | sudo tee /etc/hostname
	I0803 23:50:47.091774  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588-m03
	
	I0803 23:50:47.091816  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.095084  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.095475  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.095509  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.095705  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.095912  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.096140  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.096327  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.096531  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:47.096732  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:47.096764  346092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:50:47.224450  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:50:47.224503  346092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:50:47.224536  346092 buildroot.go:174] setting up certificates
	I0803 23:50:47.224547  346092 provision.go:84] configureAuth start
	I0803 23:50:47.224561  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:47.224940  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:47.228138  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.228514  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.228544  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.228711  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.231105  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.231425  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.231449  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.231559  346092 provision.go:143] copyHostCerts
	I0803 23:50:47.231597  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:50:47.231642  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:50:47.231680  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:50:47.231784  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:50:47.231887  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:50:47.231914  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:50:47.231924  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:50:47.231961  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:50:47.232050  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:50:47.232071  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:50:47.232075  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:50:47.232099  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:50:47.232148  346092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588-m03 san=[127.0.0.1 192.168.39.79 ha-349588-m03 localhost minikube]
	I0803 23:50:47.399469  346092 provision.go:177] copyRemoteCerts
	I0803 23:50:47.399534  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:50:47.399562  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.402686  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.403211  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.403235  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.403420  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.403606  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.403793  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.403925  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:47.492467  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:50:47.492566  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:50:47.517280  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:50:47.517354  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:50:47.542649  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:50:47.542733  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:50:47.568029  346092 provision.go:87] duration metric: took 343.464982ms to configureAuth
	I0803 23:50:47.568066  346092 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:50:47.568348  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:47.568459  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.571724  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.572147  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.572177  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.572434  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.572661  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.572844  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.573018  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.573266  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:47.573499  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:47.573552  346092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:50:47.853244  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:50:47.853276  346092 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:50:47.853285  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetURL
	I0803 23:50:47.854683  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Using libvirt version 6000000
	I0803 23:50:47.856880  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.857234  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.857272  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.857396  346092 main.go:141] libmachine: Docker is up and running!
	I0803 23:50:47.857411  346092 main.go:141] libmachine: Reticulating splines...
	I0803 23:50:47.857419  346092 client.go:171] duration metric: took 23.733587583s to LocalClient.Create
	I0803 23:50:47.857445  346092 start.go:167] duration metric: took 23.733655538s to libmachine.API.Create "ha-349588"
	I0803 23:50:47.857455  346092 start.go:293] postStartSetup for "ha-349588-m03" (driver="kvm2")
	I0803 23:50:47.857465  346092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:50:47.857481  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:47.857750  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:50:47.857787  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.859967  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.860290  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.860314  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.860473  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.860661  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.860856  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.861033  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:47.950131  346092 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:50:47.954819  346092 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:50:47.954849  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:50:47.954920  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:50:47.955013  346092 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:50:47.955026  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:50:47.955136  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:50:47.965629  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:50:47.991355  346092 start.go:296] duration metric: took 133.884824ms for postStartSetup
	I0803 23:50:47.991428  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetConfigRaw
	I0803 23:50:47.992144  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:47.995389  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.995867  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.995892  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.996186  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:50:47.996383  346092 start.go:128] duration metric: took 23.893015539s to createHost
	I0803 23:50:47.996409  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.998754  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.999113  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.999143  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.999287  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.999474  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.999669  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.999821  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:48.000025  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:48.000233  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:48.000247  346092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:50:48.118642  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729048.091068532
	
	I0803 23:50:48.118694  346092 fix.go:216] guest clock: 1722729048.091068532
	I0803 23:50:48.118704  346092 fix.go:229] Guest: 2024-08-03 23:50:48.091068532 +0000 UTC Remote: 2024-08-03 23:50:47.996396829 +0000 UTC m=+158.613815502 (delta=94.671703ms)
	I0803 23:50:48.118730  346092 fix.go:200] guest clock delta is within tolerance: 94.671703ms
	I0803 23:50:48.118739  346092 start.go:83] releasing machines lock for "ha-349588-m03", held for 24.015487886s
	I0803 23:50:48.118770  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.119061  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:48.121626  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.121930  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:48.121964  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.124323  346092 out.go:177] * Found network options:
	I0803 23:50:48.126077  346092 out.go:177]   - NO_PROXY=192.168.39.168,192.168.39.67
	W0803 23:50:48.127478  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:50:48.127501  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:50:48.127518  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.128153  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.128346  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.128449  346092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:50:48.128485  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	W0803 23:50:48.128555  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:50:48.128576  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:50:48.128633  346092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:50:48.128650  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:48.131323  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.131347  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.131817  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:48.131848  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.131891  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:48.131906  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.132081  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:48.132094  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:48.132320  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:48.132324  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:48.132523  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:48.132533  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:48.132701  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:48.132773  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:48.389379  346092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:50:48.395859  346092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:50:48.395928  346092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:50:48.415300  346092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:50:48.415326  346092 start.go:495] detecting cgroup driver to use...
	I0803 23:50:48.415389  346092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:50:48.434790  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:50:48.449942  346092 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:50:48.450002  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:50:48.464339  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:50:48.479343  346092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:50:48.598044  346092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:50:48.771836  346092 docker.go:233] disabling docker service ...
	I0803 23:50:48.771936  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:50:48.786743  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:50:48.800909  346092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:50:48.929721  346092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:50:49.070946  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:50:49.085981  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:50:49.107145  346092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:50:49.107204  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.118494  346092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:50:49.118562  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.129818  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.141337  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.152936  346092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:50:49.165557  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.176476  346092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.195609  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.206645  346092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:50:49.216707  346092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:50:49.216779  346092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:50:49.229560  346092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:50:49.240199  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:50:49.363339  346092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:50:49.509934  346092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:50:49.510026  346092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:50:49.515470  346092 start.go:563] Will wait 60s for crictl version
	I0803 23:50:49.515551  346092 ssh_runner.go:195] Run: which crictl
	I0803 23:50:49.519688  346092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:50:49.558552  346092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:50:49.558653  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:50:49.588140  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:50:49.618274  346092 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:50:49.619575  346092 out.go:177]   - env NO_PROXY=192.168.39.168
	I0803 23:50:49.620837  346092 out.go:177]   - env NO_PROXY=192.168.39.168,192.168.39.67
	I0803 23:50:49.622108  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:49.624763  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:49.625127  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:49.625156  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:49.625361  346092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:50:49.629549  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:50:49.642294  346092 mustload.go:65] Loading cluster: ha-349588
	I0803 23:50:49.642557  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:49.642856  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:49.642907  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:49.661314  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0803 23:50:49.661775  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:49.662267  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:49.662289  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:49.662672  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:49.662927  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:50:49.664647  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:50:49.665078  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:49.665123  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:49.681650  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0803 23:50:49.682116  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:49.682716  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:49.682741  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:49.683105  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:49.683339  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:50:49.683495  346092 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.79
	I0803 23:50:49.683508  346092 certs.go:194] generating shared ca certs ...
	I0803 23:50:49.683525  346092 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:50:49.683695  346092 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:50:49.683752  346092 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:50:49.683765  346092 certs.go:256] generating profile certs ...
	I0803 23:50:49.683876  346092 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:50:49.683910  346092 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80
	I0803 23:50:49.683937  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.67 192.168.39.79 192.168.39.254]
	I0803 23:50:49.893374  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80 ...
	I0803 23:50:49.893411  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80: {Name:mkdc2fe11503b9f1d1c4c6c90e0b1df90eefa7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:50:49.893608  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80 ...
	I0803 23:50:49.893627  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80: {Name:mk4257b808aff31998eea42cc17d84d4d90cd6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:50:49.893730  346092 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:50:49.893899  346092 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
	I0803 23:50:49.894070  346092 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:50:49.894092  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:50:49.894112  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:50:49.894132  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:50:49.894149  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:50:49.894168  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:50:49.894188  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:50:49.894206  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:50:49.894225  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:50:49.894291  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:50:49.894333  346092 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:50:49.894348  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:50:49.894383  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:50:49.894416  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:50:49.894447  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:50:49.894501  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:50:49.894539  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:50:49.894563  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:49.894581  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:50:49.894629  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:50:49.897587  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:49.897949  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:50:49.897980  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:49.898200  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:50:49.898435  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:50:49.898608  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:50:49.898763  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:50:49.969917  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0803 23:50:49.975416  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:50:49.988587  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0803 23:50:49.995102  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0803 23:50:50.010263  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:50:50.015483  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:50:50.027162  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:50:50.031962  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:50:50.043075  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:50:50.047433  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:50:50.061685  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0803 23:50:50.066714  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:50:50.078785  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:50:50.107115  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:50:50.132767  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:50:50.158356  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:50:50.183481  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0803 23:50:50.208890  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:50:50.233259  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:50:50.258319  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:50:50.283420  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:50:50.308734  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:50:50.332877  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:50:50.358589  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:50:50.378002  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0803 23:50:50.397027  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:50:50.415515  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:50:50.432653  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:50:50.451386  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:50:50.469186  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:50:50.487462  346092 ssh_runner.go:195] Run: openssl version
	I0803 23:50:50.494163  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:50:50.506441  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:50.511410  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:50.511508  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:50.518230  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:50:50.529617  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:50:50.541105  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:50:50.545860  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:50:50.545931  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:50:50.551998  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:50:50.563960  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:50:50.575845  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:50:50.580600  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:50:50.580681  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:50:50.586680  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:50:50.598021  346092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:50:50.602251  346092 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:50:50.602313  346092 kubeadm.go:934] updating node {m03 192.168.39.79 8443 v1.30.3 crio true true} ...
	I0803 23:50:50.602404  346092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:50:50.602429  346092 kube-vip.go:115] generating kube-vip config ...
	I0803 23:50:50.602467  346092 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:50:50.619648  346092 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:50:50.619721  346092 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:50:50.619777  346092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:50:50.630085  346092 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:50:50.630144  346092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:50:50.640083  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:50:50.640126  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:50:50.640138  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0803 23:50:50.640144  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0803 23:50:50.640152  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:50:50.640196  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:50:50.640219  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:50:50.640219  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:50:50.658556  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:50:50.658604  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:50:50.658606  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:50:50.658650  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:50:50.658680  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:50:50.658723  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:50:50.690959  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:50:50.691012  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0803 23:50:51.642253  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:50:51.652555  346092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0803 23:50:51.670949  346092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:50:51.689089  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:50:51.706834  346092 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:50:51.711106  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:50:51.724182  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:50:51.847681  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:50:51.869416  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:50:51.869884  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:51.869941  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:51.886556  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0803 23:50:51.888144  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:51.888782  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:51.888815  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:51.889193  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:51.889432  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:50:51.889615  346092 start.go:317] joinCluster: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:50:51.889756  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:50:51.889775  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:50:51.893005  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:51.893469  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:50:51.893519  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:51.893703  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:50:51.893926  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:50:51.894096  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:50:51.894277  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:50:52.068131  346092 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:50:52.068197  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jhmyct.fs9mmu6drhozseqf --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m03 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443"
	I0803 23:51:16.339682  346092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jhmyct.fs9mmu6drhozseqf --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m03 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443": (24.271445189s)
	I0803 23:51:16.339733  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:51:17.004233  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-349588-m03 minikube.k8s.io/updated_at=2024_08_03T23_51_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=ha-349588 minikube.k8s.io/primary=false
	I0803 23:51:17.133330  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-349588-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0803 23:51:17.258692  346092 start.go:319] duration metric: took 25.369072533s to joinCluster
	I0803 23:51:17.258795  346092 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:51:17.259136  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:51:17.260411  346092 out.go:177] * Verifying Kubernetes components...
	I0803 23:51:17.261728  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:51:17.568914  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:51:17.612603  346092 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:51:17.613015  346092 kapi.go:59] client config for ha-349588: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key", CAFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:51:17.613118  346092 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.168:8443
	I0803 23:51:17.613361  346092 node_ready.go:35] waiting up to 6m0s for node "ha-349588-m03" to be "Ready" ...
	I0803 23:51:17.613453  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:17.613464  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:17.613472  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:17.613477  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:17.616902  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:18.113863  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:18.113893  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:18.113905  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:18.113910  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:18.117470  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:18.613693  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:18.613717  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:18.613727  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:18.613735  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:18.617494  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:19.114494  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:19.114519  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:19.114528  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:19.114533  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:19.118484  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:19.614244  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:19.614267  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:19.614278  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:19.614289  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:19.618392  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:19.619001  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:20.114427  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:20.114450  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:20.114458  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:20.114463  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:20.118888  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:20.614464  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:20.614496  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:20.614509  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:20.614515  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:20.620459  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:51:21.114661  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:21.114690  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:21.114701  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:21.114706  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:21.119029  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:21.613753  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:21.613779  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:21.613788  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:21.613794  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:21.617083  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:22.114520  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:22.114548  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:22.114559  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:22.114564  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:22.117991  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:22.118703  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:22.614188  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:22.614211  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:22.614220  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:22.614223  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:22.617880  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:23.113699  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:23.113732  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:23.113741  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:23.113747  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:23.117692  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:23.613659  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:23.613686  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:23.613695  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:23.613698  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:23.617919  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:24.113692  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:24.113722  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:24.113731  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:24.113735  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:24.117496  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:24.613911  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:24.613936  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:24.613945  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:24.613951  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:24.617695  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:24.618387  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:25.113589  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:25.113615  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:25.113622  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:25.113625  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:25.117040  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:25.614494  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:25.614518  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:25.614527  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:25.614530  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:25.618351  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:26.114570  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:26.114600  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:26.114613  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:26.114617  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:26.117984  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:26.613722  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:26.613752  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:26.613762  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:26.613765  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:26.617091  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:27.113607  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:27.113636  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:27.113646  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:27.113651  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:27.116932  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:27.117575  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:27.613684  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:27.613707  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:27.613716  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:27.613719  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:27.617010  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:28.114036  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:28.114061  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:28.114072  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:28.114077  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:28.117689  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:28.613694  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:28.613717  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:28.613727  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:28.613731  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:28.617175  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:29.114486  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:29.114516  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:29.114528  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:29.114534  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:29.117960  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:29.118584  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:29.614581  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:29.614606  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:29.614615  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:29.614619  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:29.618157  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:30.113701  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:30.113728  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:30.113738  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:30.113745  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:30.118868  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:51:30.614474  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:30.614503  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:30.614516  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:30.614522  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:30.618005  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.113747  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:31.113773  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.113784  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.113789  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.117249  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.117818  346092 node_ready.go:49] node "ha-349588-m03" has status "Ready":"True"
	I0803 23:51:31.117844  346092 node_ready.go:38] duration metric: took 13.504465294s for node "ha-349588-m03" to be "Ready" ...
	I0803 23:51:31.117857  346092 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:51:31.117936  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:31.117948  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.117957  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.117963  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.125096  346092 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:51:31.132659  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.132757  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fzmtg
	I0803 23:51:31.132765  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.132773  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.132777  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.136446  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.137409  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.137425  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.137433  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.137437  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.140711  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.141628  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.141652  346092 pod_ready.go:81] duration metric: took 8.959263ms for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.141664  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.141746  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8qt6
	I0803 23:51:31.141756  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.141766  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.141774  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.144612  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.145703  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.145717  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.145724  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.145729  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.148882  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.149402  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.149422  346092 pod_ready.go:81] duration metric: took 7.748921ms for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.149433  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.149524  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588
	I0803 23:51:31.149537  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.149547  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.149554  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.151974  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.152558  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.152572  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.152579  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.152583  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.154985  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.155502  346092 pod_ready.go:92] pod "etcd-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.155526  346092 pod_ready.go:81] duration metric: took 6.085151ms for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.155537  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.155596  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m02
	I0803 23:51:31.155603  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.155610  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.155613  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.158896  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.159772  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:31.159786  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.159793  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.159797  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.162550  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.163470  346092 pod_ready.go:92] pod "etcd-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.163488  346092 pod_ready.go:81] duration metric: took 7.945539ms for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.163497  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.313805  346092 request.go:629] Waited for 150.235244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m03
	I0803 23:51:31.313887  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m03
	I0803 23:51:31.313894  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.313903  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.313910  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.316950  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.514235  346092 request.go:629] Waited for 196.41936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:31.514342  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:31.514350  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.514360  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.514370  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.517499  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.518373  346092 pod_ready.go:92] pod "etcd-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.518391  346092 pod_ready.go:81] duration metric: took 354.888561ms for pod "etcd-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.518408  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.714574  346092 request.go:629] Waited for 196.078655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:51:31.714640  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:51:31.714645  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.714654  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.714660  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.718192  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.914516  346092 request.go:629] Waited for 195.494317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.914594  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.914602  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.914614  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.914624  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.920699  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:31.922297  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.922322  346092 pod_ready.go:81] duration metric: took 403.9068ms for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.922337  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.114309  346092 request.go:629] Waited for 191.882286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:51:32.114410  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:51:32.114422  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.114436  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.114446  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.118362  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:32.314850  346092 request.go:629] Waited for 195.414465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:32.314943  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:32.314954  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.314968  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.314978  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.319424  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:32.319937  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:32.319956  346092 pod_ready.go:81] duration metric: took 397.612453ms for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.319968  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.514132  346092 request.go:629] Waited for 194.066274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m03
	I0803 23:51:32.514207  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m03
	I0803 23:51:32.514218  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.514230  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.514239  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.517826  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:32.714186  346092 request.go:629] Waited for 195.384867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:32.714263  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:32.714268  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.714276  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.714280  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.717622  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:32.718276  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:32.718295  346092 pod_ready.go:81] duration metric: took 398.320232ms for pod "kube-apiserver-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.718305  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.914423  346092 request.go:629] Waited for 196.027987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:51:32.914519  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:51:32.914531  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.914544  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.914557  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.918214  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.114290  346092 request.go:629] Waited for 195.385789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:33.114354  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:33.114359  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.114367  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.114372  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.118031  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.118758  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:33.118786  346092 pod_ready.go:81] duration metric: took 400.47234ms for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.118801  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.313773  346092 request.go:629] Waited for 194.874757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:51:33.313869  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:51:33.313886  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.313897  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.313904  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.322352  346092 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0803 23:51:33.514604  346092 request.go:629] Waited for 191.39455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:33.514693  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:33.514701  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.514733  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.514761  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.518436  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.519029  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:33.519057  346092 pod_ready.go:81] duration metric: took 400.246953ms for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.519070  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.714097  346092 request.go:629] Waited for 194.942392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m03
	I0803 23:51:33.714177  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m03
	I0803 23:51:33.714183  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.714191  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.714198  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.718005  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.914140  346092 request.go:629] Waited for 195.367976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:33.914237  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:33.914248  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.914260  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.914268  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.918105  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.918773  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:33.918794  346092 pod_ready.go:81] duration metric: took 399.718883ms for pod "kube-controller-manager-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.918804  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.113900  346092 request.go:629] Waited for 194.98485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:51:34.113982  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:51:34.113991  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.114001  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.114010  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.117261  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:34.313842  346092 request.go:629] Waited for 195.884146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:34.313923  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:34.313928  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.313936  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.313941  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.318055  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:34.318690  346092 pod_ready.go:92] pod "kube-proxy-bbzdt" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:34.318718  346092 pod_ready.go:81] duration metric: took 399.906769ms for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.318733  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.514717  346092 request.go:629] Waited for 195.884216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:51:34.514827  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:51:34.514837  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.514846  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.514857  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.518454  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:34.713786  346092 request.go:629] Waited for 194.249312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:34.713853  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:34.713858  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.713867  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.713872  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.717311  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:34.718121  346092 pod_ready.go:92] pod "kube-proxy-gbg5q" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:34.718146  346092 pod_ready.go:81] duration metric: took 399.405642ms for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.718156  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gxhmd" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.914190  346092 request.go:629] Waited for 195.951933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gxhmd
	I0803 23:51:34.914334  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gxhmd
	I0803 23:51:34.914349  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.914359  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.914368  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.918014  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.114246  346092 request.go:629] Waited for 195.393665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:35.114346  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:35.114351  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.114360  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.114364  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.120400  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:35.120927  346092 pod_ready.go:92] pod "kube-proxy-gxhmd" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:35.120947  346092 pod_ready.go:81] duration metric: took 402.784938ms for pod "kube-proxy-gxhmd" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.120957  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.314134  346092 request.go:629] Waited for 193.077756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:51:35.314197  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:51:35.314204  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.314212  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.314216  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.317495  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.514753  346092 request.go:629] Waited for 196.397541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:35.514819  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:35.514824  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.514832  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.514837  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.518678  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.519382  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:35.519403  346092 pod_ready.go:81] duration metric: took 398.440069ms for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.519413  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.714054  346092 request.go:629] Waited for 194.546982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:51:35.714123  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:51:35.714131  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.714139  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.714143  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.717555  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.914758  346092 request.go:629] Waited for 196.375402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:35.914818  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:35.914824  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.914832  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.914836  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.918263  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.918956  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:35.918981  346092 pod_ready.go:81] duration metric: took 399.560987ms for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.918996  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:36.114089  346092 request.go:629] Waited for 195.010266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m03
	I0803 23:51:36.114169  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m03
	I0803 23:51:36.114176  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.114187  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.114203  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.117295  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:36.314318  346092 request.go:629] Waited for 196.362498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:36.314391  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:36.314396  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.314405  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.314408  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.317683  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:36.318319  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:36.318338  346092 pod_ready.go:81] duration metric: took 399.336283ms for pod "kube-scheduler-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:36.318349  346092 pod_ready.go:38] duration metric: took 5.200478543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:51:36.318365  346092 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:51:36.318431  346092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:51:36.335947  346092 api_server.go:72] duration metric: took 19.077109461s to wait for apiserver process to appear ...
	I0803 23:51:36.335981  346092 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:51:36.336001  346092 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0803 23:51:36.342426  346092 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0803 23:51:36.342513  346092 round_trippers.go:463] GET https://192.168.39.168:8443/version
	I0803 23:51:36.342524  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.342534  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.342541  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.343354  346092 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0803 23:51:36.343424  346092 api_server.go:141] control plane version: v1.30.3
	I0803 23:51:36.343444  346092 api_server.go:131] duration metric: took 7.456114ms to wait for apiserver health ...
	I0803 23:51:36.343454  346092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:51:36.514719  346092 request.go:629] Waited for 171.163392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.514813  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.514819  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.514826  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.514831  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.521672  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:36.528449  346092 system_pods.go:59] 24 kube-system pods found
	I0803 23:51:36.528484  346092 system_pods.go:61] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:51:36.528488  346092 system_pods.go:61] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:51:36.528492  346092 system_pods.go:61] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:51:36.528496  346092 system_pods.go:61] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:51:36.528499  346092 system_pods.go:61] "etcd-ha-349588-m03" [b94d4e04-56f0-4892-927a-346559af3711] Running
	I0803 23:51:36.528502  346092 system_pods.go:61] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:51:36.528505  346092 system_pods.go:61] "kindnet-7sr59" [09355fc1-1a86-4f3f-be39-4e2e315e679f] Running
	I0803 23:51:36.528508  346092 system_pods.go:61] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:51:36.528511  346092 system_pods.go:61] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:51:36.528515  346092 system_pods.go:61] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:51:36.528518  346092 system_pods.go:61] "kube-apiserver-ha-349588-m03" [fb835dfe-b2d1-49ea-be6a-1c2f2c682095] Running
	I0803 23:51:36.528521  346092 system_pods.go:61] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:51:36.528524  346092 system_pods.go:61] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:51:36.528528  346092 system_pods.go:61] "kube-controller-manager-ha-349588-m03" [c4531c53-f3ca-42ef-a58b-1c30e752607b] Running
	I0803 23:51:36.528530  346092 system_pods.go:61] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:51:36.528533  346092 system_pods.go:61] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:51:36.528537  346092 system_pods.go:61] "kube-proxy-gxhmd" [4781a85e-af7c-49c2-80fb-c85db217189e] Running
	I0803 23:51:36.528540  346092 system_pods.go:61] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:51:36.528543  346092 system_pods.go:61] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:51:36.528549  346092 system_pods.go:61] "kube-scheduler-ha-349588-m03" [49495c84-d655-44a6-b732-a3520fc9e4db] Running
	I0803 23:51:36.528552  346092 system_pods.go:61] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:51:36.528555  346092 system_pods.go:61] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:51:36.528558  346092 system_pods.go:61] "kube-vip-ha-349588-m03" [17db3ee6-75d6-44a2-b663-22eb669c3916] Running
	I0803 23:51:36.528561  346092 system_pods.go:61] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:51:36.528567  346092 system_pods.go:74] duration metric: took 185.106343ms to wait for pod list to return data ...
	I0803 23:51:36.528578  346092 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:51:36.714053  346092 request.go:629] Waited for 185.392294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:51:36.714147  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:51:36.714158  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.714167  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.714172  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.718328  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:36.718486  346092 default_sa.go:45] found service account: "default"
	I0803 23:51:36.718504  346092 default_sa.go:55] duration metric: took 189.92038ms for default service account to be created ...
	I0803 23:51:36.718512  346092 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:51:36.914027  346092 request.go:629] Waited for 195.407927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.914096  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.914102  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.914112  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.914120  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.920598  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:36.927121  346092 system_pods.go:86] 24 kube-system pods found
	I0803 23:51:36.927158  346092 system_pods.go:89] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:51:36.927164  346092 system_pods.go:89] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:51:36.927168  346092 system_pods.go:89] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:51:36.927172  346092 system_pods.go:89] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:51:36.927176  346092 system_pods.go:89] "etcd-ha-349588-m03" [b94d4e04-56f0-4892-927a-346559af3711] Running
	I0803 23:51:36.927181  346092 system_pods.go:89] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:51:36.927185  346092 system_pods.go:89] "kindnet-7sr59" [09355fc1-1a86-4f3f-be39-4e2e315e679f] Running
	I0803 23:51:36.927189  346092 system_pods.go:89] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:51:36.927192  346092 system_pods.go:89] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:51:36.927196  346092 system_pods.go:89] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:51:36.927200  346092 system_pods.go:89] "kube-apiserver-ha-349588-m03" [fb835dfe-b2d1-49ea-be6a-1c2f2c682095] Running
	I0803 23:51:36.927205  346092 system_pods.go:89] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:51:36.927211  346092 system_pods.go:89] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:51:36.927217  346092 system_pods.go:89] "kube-controller-manager-ha-349588-m03" [c4531c53-f3ca-42ef-a58b-1c30e752607b] Running
	I0803 23:51:36.927222  346092 system_pods.go:89] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:51:36.927227  346092 system_pods.go:89] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:51:36.927233  346092 system_pods.go:89] "kube-proxy-gxhmd" [4781a85e-af7c-49c2-80fb-c85db217189e] Running
	I0803 23:51:36.927239  346092 system_pods.go:89] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:51:36.927246  346092 system_pods.go:89] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:51:36.927259  346092 system_pods.go:89] "kube-scheduler-ha-349588-m03" [49495c84-d655-44a6-b732-a3520fc9e4db] Running
	I0803 23:51:36.927264  346092 system_pods.go:89] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:51:36.927268  346092 system_pods.go:89] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:51:36.927275  346092 system_pods.go:89] "kube-vip-ha-349588-m03" [17db3ee6-75d6-44a2-b663-22eb669c3916] Running
	I0803 23:51:36.927285  346092 system_pods.go:89] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:51:36.927296  346092 system_pods.go:126] duration metric: took 208.777353ms to wait for k8s-apps to be running ...
	I0803 23:51:36.927304  346092 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:51:36.927363  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:51:36.945526  346092 system_svc.go:56] duration metric: took 18.195559ms WaitForService to wait for kubelet
	I0803 23:51:36.945565  346092 kubeadm.go:582] duration metric: took 19.686733073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:51:36.945591  346092 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:51:37.113811  346092 request.go:629] Waited for 168.118325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes
	I0803 23:51:37.113911  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes
	I0803 23:51:37.113922  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:37.113934  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:37.113943  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:37.117855  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:37.119104  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:51:37.119131  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:51:37.119165  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:51:37.119171  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:51:37.119180  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:51:37.119185  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:51:37.119192  346092 node_conditions.go:105] duration metric: took 173.595468ms to run NodePressure ...
	I0803 23:51:37.119210  346092 start.go:241] waiting for startup goroutines ...
	I0803 23:51:37.119240  346092 start.go:255] writing updated cluster config ...
	I0803 23:51:37.119591  346092 ssh_runner.go:195] Run: rm -f paused
	I0803 23:51:37.173652  346092 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0803 23:51:37.175744  346092 out.go:177] * Done! kubectl is now configured to use "ha-349588" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.250591294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729316250563177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2679dc69-de28-4dec-b02d-903e1b3669e9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.251209418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25b38948-e505-49af-9878-93574e1e4ba7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.251266254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25b38948-e505-49af-9878-93574e1e4ba7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.251599533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25b38948-e505-49af-9878-93574e1e4ba7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.294695344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e44ef7d-5f4b-4141-a1ae-067336ed74b8 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.294794875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e44ef7d-5f4b-4141-a1ae-067336ed74b8 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.296103535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a785295c-854c-4c75-a158-de1bb07d3796 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.296642974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729316296618036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a785295c-854c-4c75-a158-de1bb07d3796 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.297151459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86f42afe-150d-45c8-b39d-ed29c4f1a4b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.297209200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86f42afe-150d-45c8-b39d-ed29c4f1a4b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.297497356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86f42afe-150d-45c8-b39d-ed29c4f1a4b5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.340427560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b64e77d-5e25-4947-b8b0-7c1a677eac88 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.340561621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b64e77d-5e25-4947-b8b0-7c1a677eac88 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.342067640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9d5a5da-6888-4058-9732-fca0a65cd29c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.342910146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729316342883669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9d5a5da-6888-4058-9732-fca0a65cd29c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.343602473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2079224b-9654-47be-8a33-11e621be0415 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.343657813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2079224b-9654-47be-8a33-11e621be0415 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.343883633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2079224b-9654-47be-8a33-11e621be0415 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.384705246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be426c5e-26a5-4cbb-815a-613dd6380f3e name=/runtime.v1.RuntimeService/Version
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.385108934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be426c5e-26a5-4cbb-815a-613dd6380f3e name=/runtime.v1.RuntimeService/Version
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.387762055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e026de8-faf1-4996-ae63-e9ea5f389340 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.388242303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729316388217403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e026de8-faf1-4996-ae63-e9ea5f389340 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.388919659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6c5de5c-bb0e-4792-84d3-84c6ce4567be name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.388981958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6c5de5c-bb0e-4792-84d3-84c6ce4567be name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:55:16 ha-349588 crio[685]: time="2024-08-03 23:55:16.389426492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6c5de5c-bb0e-4792-84d3-84c6ce4567be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6fd002f59b0d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a2e2fb00f6b54       busybox-fc5497c4f-4mwk4
	ed4f0996565c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   c29d357fc68b0       storage-provisioner
	c780810d93e46       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   37f34e1fe1b85       coredns-7db6d8ff4d-fzmtg
	81817890a62a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   925c168e44d83       coredns-7db6d8ff4d-z8qt6
	8706b763ebe33       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   d2e5e2b102cd4       kindnet-2q4kc
	1f48d6d5328f8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   842c0109e8643       kube-proxy-bbzdt
	4f4a81f925548       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   c58f6f98744c8       kube-vip-ha-349588
	9bd785365c881       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   69dc19cc2bbff       etcd-ha-349588
	f061678087351       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   16e8a700bcd71       kube-scheduler-ha-349588
	c7a32eac14445       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   5d722be95195f       kube-apiserver-ha-349588
	1b3755f3d86ea       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   b6a89d83c0aaf       kube-controller-manager-ha-349588
	
	
	==> coredns [81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87] <==
	[INFO] 10.244.0.4:58030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146295s
	[INFO] 10.244.0.4:57522 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004292718s
	[INFO] 10.244.0.4:60466 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198733s
	[INFO] 10.244.0.4:45293 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002739449s
	[INFO] 10.244.0.4:50180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129872s
	[INFO] 10.244.2.2:56181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186686s
	[INFO] 10.244.2.2:56701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166229s
	[INFO] 10.244.2.2:38728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109023s
	[INFO] 10.244.2.2:45155 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001333912s
	[INFO] 10.244.2.2:51605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083342s
	[INFO] 10.244.1.2:38219 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015823s
	[INFO] 10.244.1.2:52488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178675s
	[INFO] 10.244.1.2:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097525s
	[INFO] 10.244.0.4:55438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074628s
	[INFO] 10.244.2.2:36883 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010754s
	[INFO] 10.244.2.2:53841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090252s
	[INFO] 10.244.2.2:59602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092585s
	[INFO] 10.244.1.2:59266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147793s
	[INFO] 10.244.1.2:44530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122943s
	[INFO] 10.244.0.4:42192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097553s
	[INFO] 10.244.2.2:40701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172686s
	[INFO] 10.244.2.2:38338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166475s
	[INFO] 10.244.2.2:58001 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140105s
	[INFO] 10.244.2.2:51129 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105337s
	[INFO] 10.244.1.2:44130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106258s
	
	
	==> coredns [c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d] <==
	[INFO] 10.244.1.2:47738 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000122865s
	[INFO] 10.244.1.2:35251 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000545486s
	[INFO] 10.244.0.4:59904 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165239s
	[INFO] 10.244.0.4:38273 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132118s
	[INFO] 10.244.0.4:49517 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021182s
	[INFO] 10.244.2.2:39556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137234s
	[INFO] 10.244.2.2:60582 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141615s
	[INFO] 10.244.2.2:36052 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074574s
	[INFO] 10.244.1.2:36007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019702s
	[INFO] 10.244.1.2:39746 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827365s
	[INFO] 10.244.1.2:47114 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078787s
	[INFO] 10.244.1.2:38856 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198841s
	[INFO] 10.244.1.2:49149 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001428046s
	[INFO] 10.244.0.4:47461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104433s
	[INFO] 10.244.0.4:47790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083369s
	[INFO] 10.244.0.4:39525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161056s
	[INFO] 10.244.2.2:58034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169362s
	[INFO] 10.244.1.2:44282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187567s
	[INFO] 10.244.1.2:48438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016257s
	[INFO] 10.244.0.4:52544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142962s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152657s
	[INFO] 10.244.0.4:45953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009439s
	[INFO] 10.244.1.2:57136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.1.2:58739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139508s
	[INFO] 10.244.1.2:50023 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125422s
	
	
	==> describe nodes <==
	Name:               ha-349588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:48:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:55:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:49:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    ha-349588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 72ab11669b434797a5e41b5352f74be2
	  System UUID:                72ab1166-9b43-4797-a5e4-1b5352f74be2
	  Boot ID:                    e1637c60-2dbe-4ea9-949e-0f2b10f03d1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4mwk4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-fzmtg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 coredns-7db6d8ff4d-z8qt6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m8s
	  kube-system                 etcd-ha-349588                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-2q4kc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-349588             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-349588    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-bbzdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-349588             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-349588                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m7s   kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-349588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-349588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-349588 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal  NodeReady                5m53s  kubelet          Node ha-349588 status is now: NodeReady
	  Normal  RegisteredNode           4m57s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal  RegisteredNode           3m45s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	
	
	Name:               ha-349588-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_50_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:49:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:52:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-349588-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8919c8bcbd284472a3c4b5b3ae885051
	  System UUID:                8919c8bc-bd28-4472-a3c4-b5b3ae885051
	  Boot ID:                    000b155d-14ed-4044-bb42-b52680d7292c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-szvhv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-349588-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-zqhp6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-349588-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-349588-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-gbg5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-349588-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-349588-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m17s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m17s)  kubelet          Node ha-349588-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m17s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  NodeNotReady             100s                   node-controller  Node ha-349588-m02 status is now: NodeNotReady
	
	
	Name:               ha-349588-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_51_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:55:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-349588-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43f3523f989d4c49bec19f93fe176e08
	  System UUID:                43f3523f-989d-4c49-bec1-9f93fe176e08
	  Boot ID:                    49cb00cd-1df4-4d0c-b32a-0575118d2aca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mlkx9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-349588-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-7sr59                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-apiserver-ha-349588-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-349588-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-gxhmd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-349588-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-vip-ha-349588-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-349588-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal  RegisteredNode           3m45s                node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	
	
	Name:               ha-349588-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_52_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:52:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:55:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-349588-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ac9326af96243febea155e979b68343
	  System UUID:                4ac9326a-f962-43fe-bea1-55e979b68343
	  Boot ID:                    e2f3d546-daab-46ec-be7d-1fdf0a72df36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7rfzm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-2sdf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m55s            kube-proxy       
	  Normal  RegisteredNode           3m               node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-349588-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s            node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal  NodeReady                2m41s            kubelet          Node ha-349588-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 3 23:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051792] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040861] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.793620] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.513980] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.584703] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.778088] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.061103] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063697] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.170133] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.139803] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.274186] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.334862] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.414847] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.686183] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.066614] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.504623] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[Aug 3 23:49] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.728228] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.925424] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70] <==
	{"level":"warn","ts":"2024-08-03T23:55:16.585285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.672029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.683116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.685849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.690309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.720552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.729239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.736466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.740101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.743302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.752545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.760408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.7669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.771953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.775481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.785463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.788075Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.797755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.808746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.813437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.817503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.824012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.832182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.840204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:55:16.885924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:55:16 up 7 min,  0 users,  load average: 0.21, 0.29, 0.16
	Linux ha-349588 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a] <==
	I0803 23:54:43.544002       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:54:53.547948       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:54:53.548040       1 main.go:299] handling current node
	I0803 23:54:53.548067       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:54:53.548085       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:54:53.548265       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:54:53.548288       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:54:53.548509       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:54:53.548554       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:55:03.552139       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:55:03.552186       1 main.go:299] handling current node
	I0803 23:55:03.552205       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:55:03.552211       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:55:03.552429       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:55:03.552452       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:55:03.552534       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:55:03.552554       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:55:13.542952       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:55:13.543121       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:55:13.543414       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:55:13.543478       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:55:13.543638       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:55:13.543675       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:55:13.543791       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:55:13.543823       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35] <==
	W0803 23:48:53.926522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.168]
	I0803 23:48:53.927634       1 controller.go:615] quota admission added evaluator for: endpoints
	I0803 23:48:53.932255       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0803 23:48:54.119070       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0803 23:48:55.111182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0803 23:48:55.144783       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0803 23:48:55.170847       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0803 23:49:07.575208       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0803 23:49:08.182339       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0803 23:51:41.064985       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49338: use of closed network connection
	E0803 23:51:41.285771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49360: use of closed network connection
	E0803 23:51:41.481817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49368: use of closed network connection
	E0803 23:51:41.673398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49382: use of closed network connection
	E0803 23:51:41.860608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49400: use of closed network connection
	E0803 23:51:42.066800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49404: use of closed network connection
	E0803 23:51:42.265786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49410: use of closed network connection
	E0803 23:51:42.477198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49424: use of closed network connection
	E0803 23:51:42.661794       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49434: use of closed network connection
	E0803 23:51:42.962222       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49456: use of closed network connection
	E0803 23:51:43.153260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49474: use of closed network connection
	E0803 23:51:43.341009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49484: use of closed network connection
	E0803 23:51:43.542718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49500: use of closed network connection
	E0803 23:51:43.745794       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55872: use of closed network connection
	E0803 23:51:43.941278       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55888: use of closed network connection
	W0803 23:53:03.929989       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.168 192.168.39.79]
	
	
	==> kube-controller-manager [1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2] <==
	I0803 23:51:12.392050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-349588-m03\" does not exist"
	I0803 23:51:12.409979       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-349588-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:51:12.615890       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588-m03"
	I0803 23:51:38.099794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.870305ms"
	I0803 23:51:38.149470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.332864ms"
	I0803 23:51:38.150864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="656.77µs"
	I0803 23:51:38.180131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.73µs"
	I0803 23:51:38.336907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="149.035867ms"
	I0803 23:51:38.494837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.68925ms"
	I0803 23:51:38.552625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.651091ms"
	I0803 23:51:38.552748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.712µs"
	I0803 23:51:39.851012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.989391ms"
	I0803 23:51:39.851245       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.684µs"
	I0803 23:51:39.955292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.091µs"
	I0803 23:51:40.029088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.487775ms"
	I0803 23:51:40.029453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.265µs"
	I0803 23:51:40.582519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.030815ms"
	I0803 23:51:40.582675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.813µs"
	I0803 23:52:16.772216       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-349588-m04\" does not exist"
	I0803 23:52:16.799833       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-349588-m04" podCIDRs=["10.244.3.0/24"]
	I0803 23:52:17.978544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588-m04"
	I0803 23:52:35.312225       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-349588-m04"
	I0803 23:53:36.959632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-349588-m04"
	I0803 23:53:37.137008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.48034ms"
	I0803 23:53:37.137626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.676µs"
	
	
	==> kube-proxy [1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511] <==
	I0803 23:49:09.173626       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:49:09.204726       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	I0803 23:49:09.262456       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:49:09.262510       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:49:09.262529       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:49:09.265850       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:49:09.266449       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:49:09.266491       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:49:09.267962       1 config.go:192] "Starting service config controller"
	I0803 23:49:09.268231       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:49:09.268329       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:49:09.268413       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:49:09.270529       1 config.go:319] "Starting node config controller"
	I0803 23:49:09.270556       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:49:09.369309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:49:09.369490       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:49:09.372278       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802] <==
	W0803 23:48:53.304668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 23:48:53.304715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 23:48:53.354479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:48:53.354578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:48:53.485317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:48:53.485445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:48:53.514811       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:48:53.514858       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:48:55.773274       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0803 23:51:12.541229       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6mxhx\": pod kube-proxy-6mxhx is already assigned to node \"ha-349588-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6mxhx" node="ha-349588-m03"
	E0803 23:51:12.542506       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e0f96924-b772-456b-b2f6-698af8e94038(kube-system/kube-proxy-6mxhx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6mxhx"
	E0803 23:51:12.543622       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6mxhx\": pod kube-proxy-6mxhx is already assigned to node \"ha-349588-m03\"" pod="kube-system/kube-proxy-6mxhx"
	I0803 23:51:12.543720       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6mxhx" node="ha-349588-m03"
	E0803 23:52:16.874914       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7rfzm\": pod kindnet-7rfzm is already assigned to node \"ha-349588-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7rfzm" node="ha-349588-m04"
	E0803 23:52:16.875424       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b882822a-1717-446e-9816-b0d709515f5a(kube-system/kindnet-7rfzm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7rfzm"
	E0803 23:52:16.876997       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7rfzm\": pod kindnet-7rfzm is already assigned to node \"ha-349588-m04\"" pod="kube-system/kindnet-7rfzm"
	I0803 23:52:16.877333       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7rfzm" node="ha-349588-m04"
	E0803 23:52:16.874987       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2sdf6\": pod kube-proxy-2sdf6 is already assigned to node \"ha-349588-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2sdf6" node="ha-349588-m04"
	E0803 23:52:16.878219       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2c41bdec-3f55-4626-9c5b-b757faed7907(kube-system/kube-proxy-2sdf6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2sdf6"
	E0803 23:52:16.878316       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2sdf6\": pod kube-proxy-2sdf6 is already assigned to node \"ha-349588-m04\"" pod="kube-system/kube-proxy-2sdf6"
	I0803 23:52:16.878440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2sdf6" node="ha-349588-m04"
	E0803 23:52:17.021480       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6rctf\": pod kube-proxy-6rctf is already assigned to node \"ha-349588-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6rctf" node="ha-349588-m04"
	E0803 23:52:17.021686       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 22d8b275-6e92-4f89-85b5-5138eb55855b(kube-system/kube-proxy-6rctf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6rctf"
	E0803 23:52:17.021811       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6rctf\": pod kube-proxy-6rctf is already assigned to node \"ha-349588-m04\"" pod="kube-system/kube-proxy-6rctf"
	I0803 23:52:17.022047       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6rctf" node="ha-349588-m04"
	
	
	==> kubelet <==
	Aug 03 23:50:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:50:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:50:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:51:38 ha-349588 kubelet[1373]: I0803 23:51:38.109289    1373 topology_manager.go:215] "Topology Admit Handler" podUID="a1f7a988-c439-426d-87ef-876b33660835" podNamespace="default" podName="busybox-fc5497c4f-4mwk4"
	Aug 03 23:51:38 ha-349588 kubelet[1373]: I0803 23:51:38.153663    1373 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lr5c\" (UniqueName: \"kubernetes.io/projected/a1f7a988-c439-426d-87ef-876b33660835-kube-api-access-9lr5c\") pod \"busybox-fc5497c4f-4mwk4\" (UID: \"a1f7a988-c439-426d-87ef-876b33660835\") " pod="default/busybox-fc5497c4f-4mwk4"
	Aug 03 23:51:55 ha-349588 kubelet[1373]: E0803 23:51:55.143837    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:51:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:51:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:51:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:51:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:52:55 ha-349588 kubelet[1373]: E0803 23:52:55.144049    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:52:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:52:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:52:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:52:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:53:55 ha-349588 kubelet[1373]: E0803 23:53:55.154670    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:53:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:53:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:53:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:53:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:54:55 ha-349588 kubelet[1373]: E0803 23:54:55.142276    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:54:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:54:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:54:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:54:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-349588 -n ha-349588
helpers_test.go:261: (dbg) Run:  kubectl --context ha-349588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.03s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (3.20659362s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:55:21.475673  350953 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:55:21.475805  350953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:21.475813  350953 out.go:304] Setting ErrFile to fd 2...
	I0803 23:55:21.475818  350953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:21.476019  350953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:55:21.476176  350953 out.go:298] Setting JSON to false
	I0803 23:55:21.476205  350953 mustload.go:65] Loading cluster: ha-349588
	I0803 23:55:21.476316  350953 notify.go:220] Checking for updates...
	I0803 23:55:21.476621  350953 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:55:21.476644  350953 status.go:255] checking status of ha-349588 ...
	I0803 23:55:21.477097  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:21.477173  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:21.498102  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39669
	I0803 23:55:21.498566  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:21.499220  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:21.499252  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:21.499754  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:21.499992  350953 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:55:21.501863  350953 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:55:21.501879  350953 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:21.502245  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:21.502289  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:21.518804  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0803 23:55:21.519280  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:21.519821  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:21.519864  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:21.520305  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:21.520508  350953 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:55:21.523524  350953 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:21.523878  350953 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:21.523906  350953 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:21.524025  350953 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:21.524384  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:21.524430  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:21.540216  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0803 23:55:21.540758  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:21.541274  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:21.541299  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:21.541709  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:21.541914  350953 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:55:21.542106  350953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:21.542132  350953 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:55:21.544886  350953 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:21.545353  350953 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:21.545383  350953 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:21.545539  350953 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:55:21.545729  350953 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:55:21.545893  350953 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:55:21.546060  350953 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:55:21.626212  350953 ssh_runner.go:195] Run: systemctl --version
	I0803 23:55:21.633248  350953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:21.648943  350953 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:21.648985  350953 api_server.go:166] Checking apiserver status ...
	I0803 23:55:21.649022  350953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:21.664483  350953 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:55:21.674623  350953 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:21.674680  350953 ssh_runner.go:195] Run: ls
	I0803 23:55:21.679563  350953 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:21.686513  350953 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:21.686544  350953 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:55:21.686557  350953 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:21.686578  350953 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:55:21.686906  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:21.686950  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:21.702824  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34757
	I0803 23:55:21.703321  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:21.703857  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:21.703880  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:21.704245  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:21.704480  350953 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:55:21.706198  350953 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0803 23:55:21.706215  350953 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:21.706558  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:21.706606  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:21.723383  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0803 23:55:21.723935  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:21.724485  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:21.724511  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:21.724871  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:21.725084  350953 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:55:21.727987  350953 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:21.728386  350953 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:21.728417  350953 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:21.728544  350953 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:21.728885  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:21.728937  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:21.744655  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0803 23:55:21.745077  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:21.745561  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:21.745581  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:21.745923  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:21.746129  350953 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:55:21.746339  350953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:21.746363  350953 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:55:21.749412  350953 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:21.749849  350953 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:21.749873  350953 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:21.750067  350953 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:55:21.750254  350953 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:55:21.750413  350953 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:55:21.750569  350953 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	W0803 23:55:24.269866  350953 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:24.270014  350953 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0803 23:55:24.270040  350953 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:24.270052  350953 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:55:24.270092  350953 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:24.270104  350953 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:55:24.270443  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:24.270500  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:24.286405  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0803 23:55:24.286867  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:24.287413  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:24.287438  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:24.287775  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:24.288006  350953 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:55:24.289704  350953 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:55:24.289725  350953 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:24.290038  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:24.290082  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:24.305599  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I0803 23:55:24.306103  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:24.306684  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:24.306708  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:24.307028  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:24.307213  350953 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:55:24.309951  350953 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:24.310451  350953 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:24.310485  350953 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:24.310665  350953 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:24.311016  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:24.311065  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:24.326985  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0803 23:55:24.327548  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:24.328062  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:24.328086  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:24.328397  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:24.328590  350953 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:55:24.328760  350953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:24.328795  350953 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:55:24.331541  350953 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:24.331992  350953 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:24.332022  350953 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:24.332199  350953 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:55:24.332412  350953 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:55:24.332611  350953 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:55:24.332781  350953 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:55:24.421328  350953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:24.439279  350953 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:24.439322  350953 api_server.go:166] Checking apiserver status ...
	I0803 23:55:24.439366  350953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:24.453636  350953 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:55:24.463648  350953 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:24.463715  350953 ssh_runner.go:195] Run: ls
	I0803 23:55:24.468282  350953 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:24.472882  350953 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:24.472912  350953 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:55:24.472921  350953 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:24.472939  350953 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:55:24.473242  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:24.473280  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:24.489967  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0803 23:55:24.490442  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:24.490887  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:24.490906  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:24.491232  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:24.491450  350953 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:55:24.493106  350953 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:55:24.493122  350953 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:24.493475  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:24.493536  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:24.510058  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0803 23:55:24.510586  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:24.511124  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:24.511152  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:24.511495  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:24.511742  350953 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:55:24.514686  350953 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:24.515122  350953 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:24.515166  350953 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:24.515267  350953 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:24.515731  350953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:24.515772  350953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:24.531507  350953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I0803 23:55:24.531991  350953 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:24.532427  350953 main.go:141] libmachine: Using API Version  1
	I0803 23:55:24.532449  350953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:24.532718  350953 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:24.532871  350953 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:55:24.533061  350953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:24.533084  350953 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:55:24.535640  350953 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:24.536165  350953 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:24.536194  350953 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:24.536339  350953 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:55:24.536525  350953 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:55:24.536689  350953 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:55:24.536833  350953 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:55:24.621811  350953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:24.636961  350953 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (4.865283924s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:55:26.134555  351037 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:55:26.135222  351037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:26.135289  351037 out.go:304] Setting ErrFile to fd 2...
	I0803 23:55:26.135310  351037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:26.135756  351037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:55:26.136323  351037 out.go:298] Setting JSON to false
	I0803 23:55:26.136361  351037 mustload.go:65] Loading cluster: ha-349588
	I0803 23:55:26.136475  351037 notify.go:220] Checking for updates...
	I0803 23:55:26.136762  351037 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:55:26.136778  351037 status.go:255] checking status of ha-349588 ...
	I0803 23:55:26.137260  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:26.137351  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:26.152734  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0803 23:55:26.153209  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:26.154072  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:26.154106  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:26.154463  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:26.154726  351037 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:55:26.156527  351037 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:55:26.156549  351037 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:26.156964  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:26.157022  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:26.172618  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0803 23:55:26.173092  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:26.173606  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:26.173630  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:26.173949  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:26.174143  351037 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:55:26.177107  351037 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:26.177651  351037 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:26.177682  351037 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:26.177858  351037 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:26.178179  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:26.178227  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:26.194214  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0803 23:55:26.194704  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:26.195378  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:26.195416  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:26.195805  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:26.196027  351037 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:55:26.196259  351037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:26.196292  351037 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:55:26.199650  351037 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:26.200223  351037 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:26.200250  351037 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:26.200407  351037 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:55:26.200619  351037 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:55:26.200806  351037 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:55:26.201066  351037 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:55:26.277669  351037 ssh_runner.go:195] Run: systemctl --version
	I0803 23:55:26.284079  351037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:26.299086  351037 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:26.299122  351037 api_server.go:166] Checking apiserver status ...
	I0803 23:55:26.299170  351037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:26.313781  351037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:55:26.324149  351037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:26.324218  351037 ssh_runner.go:195] Run: ls
	I0803 23:55:26.328831  351037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:26.333105  351037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:26.333134  351037 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:55:26.333145  351037 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:26.333170  351037 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:55:26.333488  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:26.333545  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:26.349483  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37289
	I0803 23:55:26.350019  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:26.350587  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:26.350608  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:26.350901  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:26.351084  351037 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:55:26.352605  351037 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0803 23:55:26.352625  351037 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:26.352923  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:26.352959  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:26.371781  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0803 23:55:26.372283  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:26.372802  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:26.372822  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:26.373236  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:26.373447  351037 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:55:26.376504  351037 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:26.377002  351037 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:26.377051  351037 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:26.377176  351037 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:26.377550  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:26.377601  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:26.393083  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I0803 23:55:26.393649  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:26.394169  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:26.394201  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:26.394558  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:26.394794  351037 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:55:26.395009  351037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:26.395039  351037 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:55:26.397909  351037 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:26.398344  351037 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:26.398384  351037 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:26.398525  351037 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:55:26.398703  351037 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:55:26.398871  351037 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:55:26.398999  351037 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	W0803 23:55:27.341795  351037 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:27.341852  351037 retry.go:31] will retry after 177.214341ms: dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:30.573789  351037 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:30.573894  351037 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0803 23:55:30.573912  351037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:30.573920  351037 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:55:30.573941  351037 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:30.573948  351037 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:55:30.574312  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:30.574365  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:30.590184  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34029
	I0803 23:55:30.590601  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:30.591122  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:30.591142  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:30.591490  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:30.591708  351037 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:55:30.593585  351037 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:55:30.593608  351037 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:30.593924  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:30.593960  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:30.609341  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0803 23:55:30.609786  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:30.610344  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:30.610370  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:30.610713  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:30.610999  351037 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:55:30.613879  351037 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:30.614343  351037 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:30.614365  351037 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:30.614494  351037 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:30.614802  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:30.614841  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:30.633092  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
	I0803 23:55:30.633560  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:30.634119  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:30.634150  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:30.634460  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:30.634636  351037 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:55:30.634856  351037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:30.634884  351037 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:55:30.637756  351037 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:30.638156  351037 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:30.638198  351037 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:30.638302  351037 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:55:30.638483  351037 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:55:30.638610  351037 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:55:30.638720  351037 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:55:30.728565  351037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:30.752437  351037 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:30.752477  351037 api_server.go:166] Checking apiserver status ...
	I0803 23:55:30.752524  351037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:30.769338  351037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:55:30.779541  351037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:30.779610  351037 ssh_runner.go:195] Run: ls
	I0803 23:55:30.784446  351037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:30.789029  351037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:30.789058  351037 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:55:30.789068  351037 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:30.789084  351037 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:55:30.789432  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:30.789473  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:30.805451  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0803 23:55:30.805962  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:30.806659  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:30.806694  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:30.807071  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:30.807312  351037 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:55:30.808917  351037 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:55:30.808937  351037 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:30.809269  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:30.809317  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:30.826563  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40759
	I0803 23:55:30.827072  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:30.827716  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:30.827746  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:30.828092  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:30.828294  351037 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:55:30.831229  351037 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:30.831639  351037 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:30.831675  351037 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:30.831878  351037 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:30.832183  351037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:30.832223  351037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:30.847519  351037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0803 23:55:30.848032  351037 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:30.848524  351037 main.go:141] libmachine: Using API Version  1
	I0803 23:55:30.848548  351037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:30.848974  351037 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:30.849166  351037 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:55:30.849385  351037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:30.849408  351037 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:55:30.852233  351037 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:30.852617  351037 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:30.852646  351037 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:30.852805  351037 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:55:30.852998  351037 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:55:30.853160  351037 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:55:30.853305  351037 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:55:30.937324  351037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:30.953784  351037 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (4.963126445s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:55:32.165443  351153 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:55:32.165593  351153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:32.165605  351153 out.go:304] Setting ErrFile to fd 2...
	I0803 23:55:32.165611  351153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:32.165814  351153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:55:32.166028  351153 out.go:298] Setting JSON to false
	I0803 23:55:32.166063  351153 mustload.go:65] Loading cluster: ha-349588
	I0803 23:55:32.166188  351153 notify.go:220] Checking for updates...
	I0803 23:55:32.166567  351153 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:55:32.166589  351153 status.go:255] checking status of ha-349588 ...
	I0803 23:55:32.167077  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:32.167155  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:32.185184  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0803 23:55:32.185679  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:32.186312  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:32.186335  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:32.186770  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:32.186980  351153 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:55:32.188657  351153 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:55:32.188679  351153 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:32.189110  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:32.189161  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:32.204510  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0803 23:55:32.204969  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:32.205574  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:32.205614  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:32.205936  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:32.206143  351153 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:55:32.208637  351153 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:32.209038  351153 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:32.209069  351153 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:32.209223  351153 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:32.209584  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:32.209630  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:32.225303  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34921
	I0803 23:55:32.225750  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:32.226277  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:32.226298  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:32.226653  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:32.226859  351153 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:55:32.227097  351153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:32.227134  351153 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:55:32.230151  351153 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:32.230589  351153 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:32.230629  351153 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:32.230789  351153 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:55:32.231002  351153 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:55:32.231167  351153 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:55:32.231357  351153 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:55:32.310256  351153 ssh_runner.go:195] Run: systemctl --version
	I0803 23:55:32.316822  351153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:32.332812  351153 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:32.332848  351153 api_server.go:166] Checking apiserver status ...
	I0803 23:55:32.332894  351153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:32.348500  351153 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:55:32.359871  351153 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:32.359942  351153 ssh_runner.go:195] Run: ls
	I0803 23:55:32.364661  351153 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:32.371088  351153 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:32.371121  351153 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:55:32.371140  351153 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:32.371164  351153 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:55:32.371506  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:32.371549  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:32.387518  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0803 23:55:32.388056  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:32.388527  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:32.388550  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:32.388866  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:32.389051  351153 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:55:32.390517  351153 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0803 23:55:32.390535  351153 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:32.390835  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:32.390875  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:32.407640  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0803 23:55:32.408088  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:32.408574  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:32.408596  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:32.408897  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:32.409124  351153 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:55:32.412246  351153 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:32.412720  351153 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:32.412758  351153 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:32.412903  351153 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:32.413267  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:32.413311  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:32.429196  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39585
	I0803 23:55:32.429664  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:32.430162  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:32.430207  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:32.430639  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:32.430849  351153 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:55:32.431071  351153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:32.431095  351153 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:55:32.433799  351153 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:32.434211  351153 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:32.434241  351153 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:32.434390  351153 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:55:32.434589  351153 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:55:32.434780  351153 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:55:32.434948  351153 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	W0803 23:55:33.645807  351153 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:33.645873  351153 retry.go:31] will retry after 272.49385ms: dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:36.718067  351153 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:36.718173  351153 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0803 23:55:36.718208  351153 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:36.718219  351153 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:55:36.718268  351153 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:36.718282  351153 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:55:36.718716  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:36.718779  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:36.734542  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43819
	I0803 23:55:36.735051  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:36.735545  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:36.735577  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:36.735899  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:36.736070  351153 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:55:36.737788  351153 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:55:36.737823  351153 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:36.738137  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:36.738173  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:36.753618  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45839
	I0803 23:55:36.754073  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:36.754576  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:36.754596  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:36.754972  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:36.755208  351153 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:55:36.758371  351153 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:36.758938  351153 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:36.758982  351153 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:36.759281  351153 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:36.759719  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:36.759766  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:36.775723  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43257
	I0803 23:55:36.776220  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:36.776736  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:36.776758  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:36.777067  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:36.777252  351153 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:55:36.777446  351153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:36.777467  351153 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:55:36.780238  351153 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:36.780584  351153 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:36.780619  351153 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:36.780787  351153 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:55:36.780963  351153 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:55:36.781139  351153 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:55:36.781258  351153 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:55:36.865334  351153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:36.882046  351153 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:36.882080  351153 api_server.go:166] Checking apiserver status ...
	I0803 23:55:36.882120  351153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:36.897084  351153 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:55:36.907669  351153 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:36.907743  351153 ssh_runner.go:195] Run: ls
	I0803 23:55:36.912672  351153 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:36.919236  351153 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:36.919267  351153 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:55:36.919276  351153 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:36.919293  351153 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:55:36.919601  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:36.919651  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:36.936005  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41667
	I0803 23:55:36.936535  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:36.937106  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:36.937129  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:36.937462  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:36.937740  351153 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:55:36.939524  351153 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:55:36.939545  351153 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:36.939862  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:36.939918  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:36.954989  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0803 23:55:36.955422  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:36.955894  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:36.955919  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:36.956252  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:36.956452  351153 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:55:36.959430  351153 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:36.959784  351153 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:36.959821  351153 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:36.959967  351153 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:36.960345  351153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:36.960387  351153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:36.978335  351153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0803 23:55:36.978743  351153 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:36.979244  351153 main.go:141] libmachine: Using API Version  1
	I0803 23:55:36.979266  351153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:36.979577  351153 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:36.979782  351153 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:55:36.980004  351153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:36.980026  351153 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:55:36.982842  351153 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:36.983326  351153 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:36.983352  351153 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:36.983510  351153 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:55:36.983700  351153 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:55:36.983836  351153 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:55:36.983972  351153 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:55:37.065580  351153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:37.082423  351153 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (4.927408736s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:55:38.345142  351252 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:55:38.345414  351252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:38.345425  351252 out.go:304] Setting ErrFile to fd 2...
	I0803 23:55:38.345431  351252 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:38.345725  351252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:55:38.345970  351252 out.go:298] Setting JSON to false
	I0803 23:55:38.346004  351252 mustload.go:65] Loading cluster: ha-349588
	I0803 23:55:38.346055  351252 notify.go:220] Checking for updates...
	I0803 23:55:38.346384  351252 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:55:38.346399  351252 status.go:255] checking status of ha-349588 ...
	I0803 23:55:38.346765  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:38.346836  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:38.364588  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43413
	I0803 23:55:38.365085  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:38.365669  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:38.365693  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:38.366163  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:38.366388  351252 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:55:38.367972  351252 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:55:38.367993  351252 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:38.368315  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:38.368367  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:38.384469  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0803 23:55:38.384948  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:38.385413  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:38.385434  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:38.385877  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:38.386097  351252 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:55:38.388876  351252 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:38.389285  351252 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:38.389314  351252 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:38.389472  351252 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:38.389875  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:38.389924  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:38.406703  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0803 23:55:38.407133  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:38.407601  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:38.407624  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:38.407993  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:38.408193  351252 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:55:38.408406  351252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:38.408434  351252 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:55:38.411176  351252 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:38.411633  351252 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:38.411662  351252 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:38.411805  351252 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:55:38.411993  351252 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:55:38.412120  351252 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:55:38.412259  351252 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:55:38.491285  351252 ssh_runner.go:195] Run: systemctl --version
	I0803 23:55:38.501110  351252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:38.520671  351252 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:38.520709  351252 api_server.go:166] Checking apiserver status ...
	I0803 23:55:38.520754  351252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:38.536683  351252 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:55:38.547488  351252 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:38.547552  351252 ssh_runner.go:195] Run: ls
	I0803 23:55:38.552157  351252 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:38.556990  351252 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:38.557021  351252 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:55:38.557032  351252 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:38.557049  351252 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:55:38.557365  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:38.557406  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:38.573721  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0803 23:55:38.574176  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:38.574736  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:38.574761  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:38.575077  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:38.575286  351252 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:55:38.577152  351252 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0803 23:55:38.577171  351252 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:38.577598  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:38.577646  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:38.593696  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42881
	I0803 23:55:38.594168  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:38.594668  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:38.594690  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:38.595047  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:38.595233  351252 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:55:38.598670  351252 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:38.599140  351252 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:38.599168  351252 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:38.599343  351252 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:38.599654  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:38.599692  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:38.615408  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0803 23:55:38.615890  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:38.616473  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:38.616506  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:38.616871  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:38.617096  351252 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:55:38.617352  351252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:38.617381  351252 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:55:38.620245  351252 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:38.620630  351252 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:38.620652  351252 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:38.620809  351252 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:55:38.620990  351252 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:55:38.621151  351252 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:55:38.621295  351252 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	W0803 23:55:39.789899  351252 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:39.789976  351252 retry.go:31] will retry after 353.309069ms: dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:42.861911  351252 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:42.862045  351252 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0803 23:55:42.862074  351252 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:42.862085  351252 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:55:42.862112  351252 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:42.862125  351252 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:55:42.862453  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:42.862526  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:42.878938  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I0803 23:55:42.879485  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:42.880101  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:42.880142  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:42.880771  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:42.881014  351252 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:55:42.882676  351252 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:55:42.882695  351252 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:42.883188  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:42.883234  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:42.899412  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I0803 23:55:42.899931  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:42.900502  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:42.900524  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:42.900833  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:42.901016  351252 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:55:42.903767  351252 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:42.904332  351252 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:42.904361  351252 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:42.904573  351252 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:42.904933  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:42.904978  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:42.921480  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0803 23:55:42.921931  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:42.922468  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:42.922502  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:42.922876  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:42.923074  351252 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:55:42.923292  351252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:42.923332  351252 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:55:42.926433  351252 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:42.926961  351252 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:42.926995  351252 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:42.927121  351252 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:55:42.927310  351252 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:55:42.927444  351252 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:55:42.927607  351252 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:55:43.013724  351252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:43.030777  351252 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:43.030808  351252 api_server.go:166] Checking apiserver status ...
	I0803 23:55:43.030865  351252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:43.045724  351252 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:55:43.056258  351252 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:43.056330  351252 ssh_runner.go:195] Run: ls
	I0803 23:55:43.061599  351252 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:43.066365  351252 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:43.066402  351252 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:55:43.066415  351252 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:43.066439  351252 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:55:43.066782  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:43.066826  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:43.082919  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0803 23:55:43.083439  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:43.083930  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:43.083955  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:43.084292  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:43.084503  351252 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:55:43.086210  351252 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:55:43.086230  351252 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:43.086660  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:43.086710  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:43.102927  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0803 23:55:43.103466  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:43.104025  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:43.104069  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:43.104473  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:43.104658  351252 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:55:43.107852  351252 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:43.108309  351252 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:43.108334  351252 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:43.108509  351252 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:43.108826  351252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:43.108870  351252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:43.124356  351252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0803 23:55:43.124831  351252 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:43.125404  351252 main.go:141] libmachine: Using API Version  1
	I0803 23:55:43.125434  351252 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:43.125841  351252 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:43.126031  351252 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:55:43.126269  351252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:43.126298  351252 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:55:43.129116  351252 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:43.129591  351252 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:43.129616  351252 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:43.129753  351252 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:55:43.129956  351252 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:55:43.130126  351252 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:55:43.130280  351252 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:55:43.209972  351252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:43.226708  351252 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
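Reading the run above: the status command reaches ha-349588-m02's SSH endpoint (192.168.39.67:22) and gets `connect: no route to host`, so the node is reported as `host: Error` / `kubelet: Nonexistent` and the command exits non-zero even though libvirt still reports the domain as running. The following is a minimal, standalone sketch (not part of minikube) that reproduces just that reachability probe with a short timeout; the address is taken from the log and would need adjusting for other nodes.

```go
// probe_ssh.go - standalone sketch: dial a node's SSH port with a timeout and
// report the result, mirroring the "no route to host" dial failures above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "192.168.39.67:22" // ha-349588-m02, taken from the log above
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A "connect: no route to host" here matches the sshutil/status errors above.
		fmt.Fprintf(os.Stderr, "dial %s failed: %v\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("dial %s succeeded; SSH port is reachable\n", addr)
}
```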
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (3.771744192s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:55:48.279144  351369 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:55:48.279425  351369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:48.279436  351369 out.go:304] Setting ErrFile to fd 2...
	I0803 23:55:48.279441  351369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:48.279647  351369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:55:48.279843  351369 out.go:298] Setting JSON to false
	I0803 23:55:48.279880  351369 mustload.go:65] Loading cluster: ha-349588
	I0803 23:55:48.280012  351369 notify.go:220] Checking for updates...
	I0803 23:55:48.280450  351369 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:55:48.280475  351369 status.go:255] checking status of ha-349588 ...
	I0803 23:55:48.281007  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:48.281063  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:48.302347  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42333
	I0803 23:55:48.302938  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:48.303632  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:48.303683  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:48.304117  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:48.304348  351369 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:55:48.306284  351369 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:55:48.306307  351369 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:48.306721  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:48.306796  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:48.322221  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0803 23:55:48.322711  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:48.323341  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:48.323370  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:48.323737  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:48.323970  351369 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:55:48.326876  351369 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:48.327342  351369 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:48.327373  351369 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:48.327453  351369 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:48.327781  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:48.327833  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:48.343629  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0803 23:55:48.344053  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:48.344542  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:48.344573  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:48.344949  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:48.345169  351369 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:55:48.345379  351369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:48.345429  351369 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:55:48.348411  351369 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:48.348868  351369 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:48.348904  351369 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:48.349032  351369 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:55:48.349233  351369 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:55:48.349417  351369 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:55:48.349618  351369 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:55:48.429722  351369 ssh_runner.go:195] Run: systemctl --version
	I0803 23:55:48.436637  351369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:48.451430  351369 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:48.451464  351369 api_server.go:166] Checking apiserver status ...
	I0803 23:55:48.451500  351369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:48.467003  351369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:55:48.477709  351369 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:48.477773  351369 ssh_runner.go:195] Run: ls
	I0803 23:55:48.482416  351369 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:48.486824  351369 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:48.486846  351369 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:55:48.486856  351369 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:48.486871  351369 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:55:48.487196  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:48.487235  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:48.503212  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
	I0803 23:55:48.503771  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:48.504372  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:48.504399  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:48.504790  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:48.505036  351369 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:55:48.506775  351369 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0803 23:55:48.506792  351369 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:48.507123  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:48.507158  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:48.522467  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
	I0803 23:55:48.522941  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:48.523497  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:48.523524  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:48.523902  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:48.524118  351369 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:55:48.526922  351369 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:48.527371  351369 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:48.527394  351369 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:48.527545  351369 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:48.527947  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:48.528000  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:48.544240  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0803 23:55:48.544696  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:48.545241  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:48.545262  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:48.545591  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:48.545786  351369 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:55:48.545946  351369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:48.545962  351369 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:55:48.548851  351369 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:48.549297  351369 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:48.549334  351369 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:48.549550  351369 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:55:48.549744  351369 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:55:48.549931  351369 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:55:48.550070  351369 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	W0803 23:55:51.633791  351369 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:51.633888  351369 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0803 23:55:51.633908  351369 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:51.633919  351369 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:55:51.633948  351369 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:51.633959  351369 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:55:51.634431  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:51.634509  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:51.651705  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36241
	I0803 23:55:51.652250  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:51.652775  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:51.652801  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:51.653094  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:51.653333  351369 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:55:51.655037  351369 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:55:51.655056  351369 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:51.655350  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:51.655392  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:51.671187  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0803 23:55:51.671727  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:51.672218  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:51.672238  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:51.672558  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:51.672777  351369 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:55:51.675876  351369 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:51.676379  351369 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:51.676406  351369 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:51.676551  351369 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:51.676908  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:51.676960  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:51.693396  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
	I0803 23:55:51.693949  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:51.694458  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:51.694480  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:51.694783  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:51.694948  351369 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:55:51.695116  351369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:51.695136  351369 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:55:51.698192  351369 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:51.698676  351369 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:51.698711  351369 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:51.698884  351369 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:55:51.699097  351369 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:55:51.699235  351369 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:55:51.699362  351369 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:55:51.791745  351369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:51.809487  351369 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:51.809551  351369 api_server.go:166] Checking apiserver status ...
	I0803 23:55:51.809594  351369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:51.825497  351369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:55:51.836751  351369 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:51.836822  351369 ssh_runner.go:195] Run: ls
	I0803 23:55:51.841526  351369 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:51.846023  351369 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:51.846049  351369 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:55:51.846061  351369 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:51.846082  351369 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:55:51.846399  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:51.846445  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:51.862215  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0803 23:55:51.862725  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:51.863220  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:51.863243  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:51.863616  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:51.863848  351369 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:55:51.865709  351369 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:55:51.865734  351369 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:51.866021  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:51.866061  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:51.882200  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0803 23:55:51.882685  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:51.883189  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:51.883244  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:51.883593  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:51.883846  351369 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:55:51.886769  351369 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:51.887188  351369 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:51.887215  351369 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:51.887410  351369 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:51.887831  351369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:51.887886  351369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:51.903287  351369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38089
	I0803 23:55:51.903725  351369 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:51.904213  351369 main.go:141] libmachine: Using API Version  1
	I0803 23:55:51.904235  351369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:51.904577  351369 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:51.904795  351369 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:55:51.904996  351369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:51.905025  351369 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:55:51.908007  351369 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:51.908446  351369 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:51.908477  351369 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:51.908606  351369 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:55:51.908806  351369 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:55:51.908955  351369 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:55:51.909124  351369 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:55:51.989185  351369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:52.004522  351369 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
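A side note on the repeated `unable to find freezer cgroup` warnings in these runs: the check greps `/proc/<pid>/cgroup` for a `freezer` controller entry, which only exists in a cgroup v1 hierarchy; on a cgroup v2 host that file has a single `0::/...` line, the grep exits 1, and the status check falls through to the apiserver `/healthz` probe (which returns 200 here). The sketch below is an independent illustration of that distinction, not minikube's implementation.

```go
// cgroup_freezer.go - sketch: check whether the current process's cgroup file
// lists a v1 "freezer" controller, explaining why the warnings above are
// expected (and harmless) on cgroup v2 hosts.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// cgroup v1 lines look like "7:freezer:/..."; cgroup v2 exposes a single
		// "0::/..." line with no named controllers.
		if strings.Contains(sc.Text(), ":freezer:") {
			found = true
			break
		}
	}
	if found {
		fmt.Println("freezer controller present (cgroup v1 hierarchy)")
	} else {
		fmt.Println("no freezer controller line (likely cgroup v2)")
	}
}
```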
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (3.757555583s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:55:56.024893  351470 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:55:56.025055  351470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:56.025065  351470 out.go:304] Setting ErrFile to fd 2...
	I0803 23:55:56.025070  351470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:55:56.025346  351470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:55:56.025620  351470 out.go:298] Setting JSON to false
	I0803 23:55:56.025663  351470 mustload.go:65] Loading cluster: ha-349588
	I0803 23:55:56.025713  351470 notify.go:220] Checking for updates...
	I0803 23:55:56.026127  351470 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:55:56.026146  351470 status.go:255] checking status of ha-349588 ...
	I0803 23:55:56.026592  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:56.026654  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:56.042582  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44525
	I0803 23:55:56.043034  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:56.043704  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:56.043740  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:56.044114  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:56.044320  351470 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:55:56.046019  351470 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:55:56.046041  351470 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:56.046337  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:56.046375  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:56.063319  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40665
	I0803 23:55:56.063792  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:56.064303  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:56.064329  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:56.064620  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:56.064859  351470 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:55:56.067835  351470 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:56.068412  351470 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:56.068472  351470 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:56.068701  351470 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:55:56.069125  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:56.069186  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:56.084948  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40165
	I0803 23:55:56.085430  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:56.085962  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:56.085981  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:56.086336  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:56.086538  351470 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:55:56.086759  351470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:56.086807  351470 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:55:56.089784  351470 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:56.090186  351470 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:55:56.090208  351470 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:55:56.090341  351470 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:55:56.090543  351470 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:55:56.090710  351470 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:55:56.090861  351470 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:55:56.169905  351470 ssh_runner.go:195] Run: systemctl --version
	I0803 23:55:56.176713  351470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:56.202453  351470 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:56.202489  351470 api_server.go:166] Checking apiserver status ...
	I0803 23:55:56.202526  351470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:56.219816  351470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:55:56.230723  351470 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:56.230792  351470 ssh_runner.go:195] Run: ls
	I0803 23:55:56.236502  351470 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:56.240978  351470 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:56.241020  351470 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:55:56.241034  351470 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:56.241078  351470 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:55:56.241398  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:56.241450  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:56.258571  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
	I0803 23:55:56.259060  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:56.259606  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:56.259639  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:56.259985  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:56.260252  351470 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:55:56.262084  351470 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0803 23:55:56.262101  351470 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:56.262418  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:56.262461  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:56.277866  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40295
	I0803 23:55:56.278405  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:56.278991  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:56.279014  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:56.279359  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:56.279620  351470 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:55:56.282617  351470 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:56.283137  351470 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:56.283159  351470 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:56.283306  351470 host.go:66] Checking if "ha-349588-m02" exists ...
	I0803 23:55:56.283667  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:56.283721  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:56.300130  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0803 23:55:56.300696  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:56.301282  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:56.301309  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:56.301676  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:56.301888  351470 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:55:56.302079  351470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:56.302101  351470 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:55:56.304647  351470 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:56.305182  351470 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:55:56.305209  351470 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:55:56.305368  351470 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:55:56.305571  351470 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:55:56.305745  351470 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:55:56.305924  351470 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	W0803 23:55:59.373781  351470 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.67:22: connect: no route to host
	W0803 23:55:59.373919  351470 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	E0803 23:55:59.373958  351470 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:59.373968  351470 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0803 23:55:59.373995  351470 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.67:22: connect: no route to host
	I0803 23:55:59.374009  351470 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:55:59.374637  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:59.374737  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:59.390678  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0803 23:55:59.391213  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:59.391805  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:59.391828  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:59.392171  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:59.392376  351470 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:55:59.394316  351470 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:55:59.394351  351470 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:59.394676  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:59.394723  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:59.409970  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45645
	I0803 23:55:59.410510  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:59.411174  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:59.411203  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:59.411549  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:59.411773  351470 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:55:59.414778  351470 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:59.415359  351470 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:59.415402  351470 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:59.415504  351470 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:55:59.415902  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:59.415947  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:59.431207  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43921
	I0803 23:55:59.431598  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:59.432146  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:59.432168  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:59.432485  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:59.432700  351470 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:55:59.432888  351470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:59.432914  351470 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:55:59.435754  351470 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:59.436145  351470 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:55:59.436199  351470 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:55:59.436316  351470 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:55:59.436513  351470 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:55:59.436647  351470 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:55:59.436790  351470 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:55:59.521487  351470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:59.538153  351470 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:55:59.538185  351470 api_server.go:166] Checking apiserver status ...
	I0803 23:55:59.538226  351470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:55:59.553072  351470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:55:59.563143  351470 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:55:59.563200  351470 ssh_runner.go:195] Run: ls
	I0803 23:55:59.568051  351470 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:55:59.574448  351470 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:55:59.574475  351470 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:55:59.574485  351470 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:55:59.574500  351470 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:55:59.574819  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:59.574853  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:59.591538  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0803 23:55:59.592064  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:59.592578  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:59.592606  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:59.593003  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:59.593231  351470 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:55:59.594951  351470 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:55:59.594967  351470 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:59.595251  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:59.595285  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:59.611944  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
	I0803 23:55:59.612444  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:59.612944  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:59.612969  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:59.613296  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:59.613484  351470 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:55:59.616323  351470 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:59.616783  351470 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:59.616808  351470 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:59.616949  351470 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:55:59.617251  351470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:55:59.617293  351470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:55:59.633803  351470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0803 23:55:59.634245  351470 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:55:59.634742  351470 main.go:141] libmachine: Using API Version  1
	I0803 23:55:59.634767  351470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:55:59.635207  351470 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:55:59.635462  351470 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:55:59.635680  351470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:55:59.635707  351470 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:55:59.638636  351470 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:59.639084  351470 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:55:59.639112  351470 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:55:59.639280  351470 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:55:59.639454  351470 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:55:59.639638  351470 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:55:59.639818  351470 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:55:59.721150  351470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:55:59.737546  351470 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
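By the next run below, the picture changes: the kvm2 driver's GetState call now reports the m02 domain as "Stopped", the remaining SSH/kubelet/apiserver checks are skipped, and the status command exits with status 7 instead of 3. A quick way to confirm the underlying domain state independently of minikube is to ask libvirt directly; the sketch below does that via `virsh domstate`, assuming virsh is installed and the domain lives on the default `qemu:///system` URI (the domain name comes from the log).

```go
// domstate.go - sketch: query libvirt for the domain state that the kvm2
// driver's GetState calls surface in the status output above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	domain := "ha-349588-m02" // from the log; adjust for other nodes
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "domstate", domain).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "virsh domstate %s: %v\n%s", domain, err, out)
		os.Exit(1)
	}
	// Typical outputs are "running" or "shut off", matching the Running/Stopped
	// host states reported by the status command.
	fmt.Printf("%s: %s\n", domain, strings.TrimSpace(string(out)))
}
```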
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 7 (646.219618ms)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-349588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:56:06.618883  351622 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:56:06.619180  351622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:56:06.619192  351622 out.go:304] Setting ErrFile to fd 2...
	I0803 23:56:06.619198  351622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:56:06.619376  351622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:56:06.619578  351622 out.go:298] Setting JSON to false
	I0803 23:56:06.619614  351622 mustload.go:65] Loading cluster: ha-349588
	I0803 23:56:06.619663  351622 notify.go:220] Checking for updates...
	I0803 23:56:06.620111  351622 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:56:06.620137  351622 status.go:255] checking status of ha-349588 ...
	I0803 23:56:06.620569  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:06.620654  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:06.636125  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0803 23:56:06.636622  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:06.637244  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:06.637266  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:06.637784  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:06.638017  351622 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:56:06.640015  351622 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0803 23:56:06.640035  351622 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:56:06.640467  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:06.640523  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:06.656195  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36735
	I0803 23:56:06.656625  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:06.657105  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:06.657132  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:06.657457  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:06.657686  351622 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:56:06.660621  351622 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:56:06.661066  351622 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:56:06.661102  351622 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:56:06.661230  351622 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:56:06.661558  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:06.661594  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:06.677280  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45511
	I0803 23:56:06.677787  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:06.678270  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:06.678293  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:06.678601  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:06.678814  351622 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:56:06.679039  351622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:56:06.679064  351622 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:56:06.682236  351622 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:56:06.682752  351622 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:56:06.682795  351622 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:56:06.682959  351622 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:56:06.683131  351622 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:56:06.683267  351622 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:56:06.683407  351622 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:56:06.762480  351622 ssh_runner.go:195] Run: systemctl --version
	I0803 23:56:06.768895  351622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:56:06.786157  351622 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:56:06.786193  351622 api_server.go:166] Checking apiserver status ...
	I0803 23:56:06.786230  351622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:56:06.806673  351622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0803 23:56:06.816845  351622 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:56:06.816903  351622 ssh_runner.go:195] Run: ls
	I0803 23:56:06.822105  351622 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:56:06.832511  351622 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:56:06.832552  351622 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0803 23:56:06.832563  351622 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:56:06.832587  351622 status.go:255] checking status of ha-349588-m02 ...
	I0803 23:56:06.832936  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:06.832980  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:06.848530  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
	I0803 23:56:06.849105  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:06.849742  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:06.849775  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:06.850201  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:06.850406  351622 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:56:06.852234  351622 status.go:330] ha-349588-m02 host status = "Stopped" (err=<nil>)
	I0803 23:56:06.852253  351622 status.go:343] host is not running, skipping remaining checks
	I0803 23:56:06.852262  351622 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:56:06.852286  351622 status.go:255] checking status of ha-349588-m03 ...
	I0803 23:56:06.852642  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:06.852691  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:06.868791  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0803 23:56:06.869310  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:06.869810  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:06.869836  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:06.870165  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:06.870355  351622 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:56:06.872015  351622 status.go:330] ha-349588-m03 host status = "Running" (err=<nil>)
	I0803 23:56:06.872037  351622 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:56:06.872344  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:06.872377  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:06.888602  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0803 23:56:06.889127  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:06.889654  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:06.889682  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:06.890040  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:06.890274  351622 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:56:06.893322  351622 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:56:06.893737  351622 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:56:06.893767  351622 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:56:06.893902  351622 host.go:66] Checking if "ha-349588-m03" exists ...
	I0803 23:56:06.894239  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:06.894286  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:06.911226  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40883
	I0803 23:56:06.911698  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:06.912205  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:06.912227  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:06.912517  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:06.912696  351622 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:56:06.912870  351622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:56:06.912891  351622 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:56:06.915762  351622 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:56:06.916208  351622 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:56:06.916233  351622 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:56:06.916351  351622 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:56:06.916503  351622 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:56:06.916661  351622 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:56:06.916864  351622 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:56:07.007039  351622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:56:07.021947  351622 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0803 23:56:07.021991  351622 api_server.go:166] Checking apiserver status ...
	I0803 23:56:07.022050  351622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:56:07.037638  351622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup
	W0803 23:56:07.048071  351622 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1567/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:56:07.048152  351622 ssh_runner.go:195] Run: ls
	I0803 23:56:07.053135  351622 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0803 23:56:07.057560  351622 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0803 23:56:07.057590  351622 status.go:422] ha-349588-m03 apiserver status = Running (err=<nil>)
	I0803 23:56:07.057599  351622 status.go:257] ha-349588-m03 status: &{Name:ha-349588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:56:07.057616  351622 status.go:255] checking status of ha-349588-m04 ...
	I0803 23:56:07.057908  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:07.057948  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:07.073787  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I0803 23:56:07.074309  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:07.074836  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:07.074879  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:07.075226  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:07.075439  351622 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:56:07.077075  351622 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0803 23:56:07.077093  351622 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:56:07.077389  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:07.077451  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:07.093561  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0803 23:56:07.094124  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:07.094611  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:07.094637  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:07.094941  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:07.095121  351622 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0803 23:56:07.097940  351622 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:56:07.098409  351622 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:56:07.098450  351622 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:56:07.098607  351622 host.go:66] Checking if "ha-349588-m04" exists ...
	I0803 23:56:07.098915  351622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:07.098957  351622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:07.115797  351622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0803 23:56:07.116252  351622 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:07.116792  351622 main.go:141] libmachine: Using API Version  1
	I0803 23:56:07.116817  351622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:07.117111  351622 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:07.117304  351622 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:56:07.117531  351622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:56:07.117553  351622 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:56:07.120364  351622 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:56:07.120785  351622 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:56:07.120815  351622 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:56:07.120951  351622 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:56:07.121140  351622 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:56:07.121311  351622 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:56:07.121453  351622 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:56:07.201252  351622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:56:07.218085  351622 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
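The status run above checks each control-plane node by querying the shared apiserver endpoint https://192.168.39.254:8443/healthz and treating a 200 response with body "ok" as healthy. A rough manual equivalent of that probe, shown here only as a hypothetical reproduction (the -k flag skips TLS verification; minikube's own client verifies against the cluster CA from the kubeconfig instead):

	# Probe the load-balanced apiserver endpoint seen in the log; a healthy cluster answers 200 / "ok".
	curl -k -s -w '\n%{http_code}\n' https://192.168.39.254:8443/healthz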
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr" : exit status 7
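The non-zero exit is consistent with the log above: ha-349588-m02 is reported as Host:Stopped while the other nodes are Running, and minikube status reports that degraded state through a non-zero exit code. A minimal sketch for re-running the same check by hand, assuming the same binary path and profile name as this run:

	# Repeat the status check from the failed assertion and surface its exit code (7 in this run).
	out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
	echo "status exit code: $?"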
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-349588 -n ha-349588
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-349588 logs -n 25: (1.461286122s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588:/home/docker/cp-test_ha-349588-m03_ha-349588.txt                       |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588 sudo cat                                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588.txt                                 |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m02:/home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m04 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp testdata/cp-test.txt                                                | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588:/home/docker/cp-test_ha-349588-m04_ha-349588.txt                       |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588 sudo cat                                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588.txt                                 |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m02:/home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03:/home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m03 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-349588 node stop m02 -v=7                                                     | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-349588 node start m02 -v=7                                                    | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:48:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:48:09.418625  346092 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:48:09.418752  346092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:48:09.418762  346092 out.go:304] Setting ErrFile to fd 2...
	I0803 23:48:09.418768  346092 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:48:09.418971  346092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:48:09.419574  346092 out.go:298] Setting JSON to false
	I0803 23:48:09.420569  346092 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":30637,"bootTime":1722698252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:48:09.420633  346092 start.go:139] virtualization: kvm guest
	I0803 23:48:09.422786  346092 out.go:177] * [ha-349588] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:48:09.424092  346092 out.go:177]   - MINIKUBE_LOCATION=19370
	I0803 23:48:09.424144  346092 notify.go:220] Checking for updates...
	I0803 23:48:09.426416  346092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:48:09.427707  346092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:48:09.429120  346092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:48:09.430526  346092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:48:09.431632  346092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:48:09.432954  346092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:48:09.470138  346092 out.go:177] * Using the kvm2 driver based on user configuration
	I0803 23:48:09.471317  346092 start.go:297] selected driver: kvm2
	I0803 23:48:09.471334  346092 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:48:09.471347  346092 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:48:09.472158  346092 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:48:09.472262  346092 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:48:09.488603  346092 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:48:09.488655  346092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:48:09.488888  346092 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:48:09.488957  346092 cni.go:84] Creating CNI manager for ""
	I0803 23:48:09.488969  346092 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0803 23:48:09.488977  346092 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 23:48:09.489047  346092 start.go:340] cluster config:
	{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:48:09.489163  346092 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:48:09.490895  346092 out.go:177] * Starting "ha-349588" primary control-plane node in "ha-349588" cluster
	I0803 23:48:09.491984  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:48:09.492039  346092 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:48:09.492063  346092 cache.go:56] Caching tarball of preloaded images
	I0803 23:48:09.492163  346092 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:48:09.492174  346092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:48:09.492520  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:48:09.492548  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json: {Name:mk903cfda9df964846737e7e0ecec8ea46a5827c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:09.492717  346092 start.go:360] acquireMachinesLock for ha-349588: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:48:09.492747  346092 start.go:364] duration metric: took 17.293µs to acquireMachinesLock for "ha-349588"
	I0803 23:48:09.492765  346092 start.go:93] Provisioning new machine with config: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:48:09.492824  346092 start.go:125] createHost starting for "" (driver="kvm2")
	I0803 23:48:09.494421  346092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:48:09.494578  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:48:09.494618  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:48:09.509993  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0803 23:48:09.510451  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:48:09.511049  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:48:09.511070  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:48:09.511439  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:48:09.511701  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:09.511862  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:09.512020  346092 start.go:159] libmachine.API.Create for "ha-349588" (driver="kvm2")
	I0803 23:48:09.512050  346092 client.go:168] LocalClient.Create starting
	I0803 23:48:09.512089  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0803 23:48:09.512149  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:48:09.512174  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:48:09.512252  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0803 23:48:09.512279  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:48:09.512295  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:48:09.512320  346092 main.go:141] libmachine: Running pre-create checks...
	I0803 23:48:09.512339  346092 main.go:141] libmachine: (ha-349588) Calling .PreCreateCheck
	I0803 23:48:09.512682  346092 main.go:141] libmachine: (ha-349588) Calling .GetConfigRaw
	I0803 23:48:09.513102  346092 main.go:141] libmachine: Creating machine...
	I0803 23:48:09.513120  346092 main.go:141] libmachine: (ha-349588) Calling .Create
	I0803 23:48:09.513250  346092 main.go:141] libmachine: (ha-349588) Creating KVM machine...
	I0803 23:48:09.514581  346092 main.go:141] libmachine: (ha-349588) DBG | found existing default KVM network
	I0803 23:48:09.515280  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.515113  346115 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d990}
	I0803 23:48:09.515306  346092 main.go:141] libmachine: (ha-349588) DBG | created network xml: 
	I0803 23:48:09.515319  346092 main.go:141] libmachine: (ha-349588) DBG | <network>
	I0803 23:48:09.515327  346092 main.go:141] libmachine: (ha-349588) DBG |   <name>mk-ha-349588</name>
	I0803 23:48:09.515341  346092 main.go:141] libmachine: (ha-349588) DBG |   <dns enable='no'/>
	I0803 23:48:09.515351  346092 main.go:141] libmachine: (ha-349588) DBG |   
	I0803 23:48:09.515360  346092 main.go:141] libmachine: (ha-349588) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0803 23:48:09.515366  346092 main.go:141] libmachine: (ha-349588) DBG |     <dhcp>
	I0803 23:48:09.515374  346092 main.go:141] libmachine: (ha-349588) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0803 23:48:09.515391  346092 main.go:141] libmachine: (ha-349588) DBG |     </dhcp>
	I0803 23:48:09.515412  346092 main.go:141] libmachine: (ha-349588) DBG |   </ip>
	I0803 23:48:09.515424  346092 main.go:141] libmachine: (ha-349588) DBG |   
	I0803 23:48:09.515430  346092 main.go:141] libmachine: (ha-349588) DBG | </network>
	I0803 23:48:09.515435  346092 main.go:141] libmachine: (ha-349588) DBG | 
	I0803 23:48:09.520559  346092 main.go:141] libmachine: (ha-349588) DBG | trying to create private KVM network mk-ha-349588 192.168.39.0/24...
	I0803 23:48:09.590357  346092 main.go:141] libmachine: (ha-349588) DBG | private KVM network mk-ha-349588 192.168.39.0/24 created
	I0803 23:48:09.590389  346092 main.go:141] libmachine: (ha-349588) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588 ...
	I0803 23:48:09.590434  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.590305  346115 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:48:09.590461  346092 main.go:141] libmachine: (ha-349588) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:48:09.590489  346092 main.go:141] libmachine: (ha-349588) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:48:09.872162  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.871931  346115 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa...
	I0803 23:48:09.925823  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.925663  346115 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/ha-349588.rawdisk...
	I0803 23:48:09.925875  346092 main.go:141] libmachine: (ha-349588) DBG | Writing magic tar header
	I0803 23:48:09.925892  346092 main.go:141] libmachine: (ha-349588) DBG | Writing SSH key tar header
	I0803 23:48:09.925900  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:09.925798  346115 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588 ...
	I0803 23:48:09.925912  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588
	I0803 23:48:09.925995  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588 (perms=drwx------)
	I0803 23:48:09.926018  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0803 23:48:09.926030  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:48:09.926051  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0803 23:48:09.926063  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:48:09.926077  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0803 23:48:09.926086  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:48:09.926094  346092 main.go:141] libmachine: (ha-349588) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:48:09.926102  346092 main.go:141] libmachine: (ha-349588) Creating domain...
	I0803 23:48:09.926112  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0803 23:48:09.926120  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:48:09.926126  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:48:09.926135  346092 main.go:141] libmachine: (ha-349588) DBG | Checking permissions on dir: /home
	I0803 23:48:09.926168  346092 main.go:141] libmachine: (ha-349588) DBG | Skipping /home - not owner
	I0803 23:48:09.927340  346092 main.go:141] libmachine: (ha-349588) define libvirt domain using xml: 
	I0803 23:48:09.927363  346092 main.go:141] libmachine: (ha-349588) <domain type='kvm'>
	I0803 23:48:09.927373  346092 main.go:141] libmachine: (ha-349588)   <name>ha-349588</name>
	I0803 23:48:09.927384  346092 main.go:141] libmachine: (ha-349588)   <memory unit='MiB'>2200</memory>
	I0803 23:48:09.927393  346092 main.go:141] libmachine: (ha-349588)   <vcpu>2</vcpu>
	I0803 23:48:09.927402  346092 main.go:141] libmachine: (ha-349588)   <features>
	I0803 23:48:09.927414  346092 main.go:141] libmachine: (ha-349588)     <acpi/>
	I0803 23:48:09.927420  346092 main.go:141] libmachine: (ha-349588)     <apic/>
	I0803 23:48:09.927429  346092 main.go:141] libmachine: (ha-349588)     <pae/>
	I0803 23:48:09.927443  346092 main.go:141] libmachine: (ha-349588)     
	I0803 23:48:09.927452  346092 main.go:141] libmachine: (ha-349588)   </features>
	I0803 23:48:09.927466  346092 main.go:141] libmachine: (ha-349588)   <cpu mode='host-passthrough'>
	I0803 23:48:09.927474  346092 main.go:141] libmachine: (ha-349588)   
	I0803 23:48:09.927485  346092 main.go:141] libmachine: (ha-349588)   </cpu>
	I0803 23:48:09.927493  346092 main.go:141] libmachine: (ha-349588)   <os>
	I0803 23:48:09.927500  346092 main.go:141] libmachine: (ha-349588)     <type>hvm</type>
	I0803 23:48:09.927509  346092 main.go:141] libmachine: (ha-349588)     <boot dev='cdrom'/>
	I0803 23:48:09.927519  346092 main.go:141] libmachine: (ha-349588)     <boot dev='hd'/>
	I0803 23:48:09.927528  346092 main.go:141] libmachine: (ha-349588)     <bootmenu enable='no'/>
	I0803 23:48:09.927541  346092 main.go:141] libmachine: (ha-349588)   </os>
	I0803 23:48:09.927557  346092 main.go:141] libmachine: (ha-349588)   <devices>
	I0803 23:48:09.927567  346092 main.go:141] libmachine: (ha-349588)     <disk type='file' device='cdrom'>
	I0803 23:48:09.927580  346092 main.go:141] libmachine: (ha-349588)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/boot2docker.iso'/>
	I0803 23:48:09.927595  346092 main.go:141] libmachine: (ha-349588)       <target dev='hdc' bus='scsi'/>
	I0803 23:48:09.927606  346092 main.go:141] libmachine: (ha-349588)       <readonly/>
	I0803 23:48:09.927615  346092 main.go:141] libmachine: (ha-349588)     </disk>
	I0803 23:48:09.927625  346092 main.go:141] libmachine: (ha-349588)     <disk type='file' device='disk'>
	I0803 23:48:09.927637  346092 main.go:141] libmachine: (ha-349588)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:48:09.927655  346092 main.go:141] libmachine: (ha-349588)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/ha-349588.rawdisk'/>
	I0803 23:48:09.927670  346092 main.go:141] libmachine: (ha-349588)       <target dev='hda' bus='virtio'/>
	I0803 23:48:09.927681  346092 main.go:141] libmachine: (ha-349588)     </disk>
	I0803 23:48:09.927691  346092 main.go:141] libmachine: (ha-349588)     <interface type='network'>
	I0803 23:48:09.927707  346092 main.go:141] libmachine: (ha-349588)       <source network='mk-ha-349588'/>
	I0803 23:48:09.927717  346092 main.go:141] libmachine: (ha-349588)       <model type='virtio'/>
	I0803 23:48:09.927728  346092 main.go:141] libmachine: (ha-349588)     </interface>
	I0803 23:48:09.927743  346092 main.go:141] libmachine: (ha-349588)     <interface type='network'>
	I0803 23:48:09.927756  346092 main.go:141] libmachine: (ha-349588)       <source network='default'/>
	I0803 23:48:09.927766  346092 main.go:141] libmachine: (ha-349588)       <model type='virtio'/>
	I0803 23:48:09.927776  346092 main.go:141] libmachine: (ha-349588)     </interface>
	I0803 23:48:09.927786  346092 main.go:141] libmachine: (ha-349588)     <serial type='pty'>
	I0803 23:48:09.927795  346092 main.go:141] libmachine: (ha-349588)       <target port='0'/>
	I0803 23:48:09.927804  346092 main.go:141] libmachine: (ha-349588)     </serial>
	I0803 23:48:09.927829  346092 main.go:141] libmachine: (ha-349588)     <console type='pty'>
	I0803 23:48:09.927851  346092 main.go:141] libmachine: (ha-349588)       <target type='serial' port='0'/>
	I0803 23:48:09.927862  346092 main.go:141] libmachine: (ha-349588)     </console>
	I0803 23:48:09.927868  346092 main.go:141] libmachine: (ha-349588)     <rng model='virtio'>
	I0803 23:48:09.927877  346092 main.go:141] libmachine: (ha-349588)       <backend model='random'>/dev/random</backend>
	I0803 23:48:09.927883  346092 main.go:141] libmachine: (ha-349588)     </rng>
	I0803 23:48:09.927888  346092 main.go:141] libmachine: (ha-349588)     
	I0803 23:48:09.927892  346092 main.go:141] libmachine: (ha-349588)     
	I0803 23:48:09.927898  346092 main.go:141] libmachine: (ha-349588)   </devices>
	I0803 23:48:09.927904  346092 main.go:141] libmachine: (ha-349588) </domain>
	I0803 23:48:09.927911  346092 main.go:141] libmachine: (ha-349588) 
	I0803 23:48:09.932195  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:c8:e6:c5 in network default
	I0803 23:48:09.932825  346092 main.go:141] libmachine: (ha-349588) Ensuring networks are active...
	I0803 23:48:09.932848  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:09.933616  346092 main.go:141] libmachine: (ha-349588) Ensuring network default is active
	I0803 23:48:09.933995  346092 main.go:141] libmachine: (ha-349588) Ensuring network mk-ha-349588 is active
	I0803 23:48:09.934553  346092 main.go:141] libmachine: (ha-349588) Getting domain xml...
	I0803 23:48:09.935413  346092 main.go:141] libmachine: (ha-349588) Creating domain...
	I0803 23:48:11.143458  346092 main.go:141] libmachine: (ha-349588) Waiting to get IP...
	I0803 23:48:11.144201  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.144600  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.144647  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.144590  346115 retry.go:31] will retry after 217.821157ms: waiting for machine to come up
	I0803 23:48:11.364303  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.364800  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.364827  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.364747  346115 retry.go:31] will retry after 290.305806ms: waiting for machine to come up
	I0803 23:48:11.656462  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.656882  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.656915  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.656819  346115 retry.go:31] will retry after 307.829475ms: waiting for machine to come up
	I0803 23:48:11.966421  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:11.966824  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:11.966854  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:11.966778  346115 retry.go:31] will retry after 424.675082ms: waiting for machine to come up
	I0803 23:48:12.393572  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:12.394043  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:12.394075  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:12.393985  346115 retry.go:31] will retry after 469.819501ms: waiting for machine to come up
	I0803 23:48:12.865672  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:12.866068  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:12.866113  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:12.866030  346115 retry.go:31] will retry after 703.183302ms: waiting for machine to come up
	I0803 23:48:13.571033  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:13.571450  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:13.571536  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:13.571432  346115 retry.go:31] will retry after 1.123702351s: waiting for machine to come up
	I0803 23:48:14.696577  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:14.697000  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:14.697052  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:14.696947  346115 retry.go:31] will retry after 1.12664628s: waiting for machine to come up
	I0803 23:48:15.824971  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:15.825444  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:15.825471  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:15.825370  346115 retry.go:31] will retry after 1.337432737s: waiting for machine to come up
	I0803 23:48:17.164972  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:17.165341  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:17.165365  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:17.165301  346115 retry.go:31] will retry after 1.584311544s: waiting for machine to come up
	I0803 23:48:18.752092  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:18.752563  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:18.752599  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:18.752497  346115 retry.go:31] will retry after 2.404172369s: waiting for machine to come up
	I0803 23:48:21.159266  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:21.159722  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:21.159746  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:21.159660  346115 retry.go:31] will retry after 3.566530198s: waiting for machine to come up
	I0803 23:48:24.727868  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:24.728217  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:24.728244  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:24.728197  346115 retry.go:31] will retry after 4.050810748s: waiting for machine to come up
	I0803 23:48:28.782752  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:28.783279  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find current IP address of domain ha-349588 in network mk-ha-349588
	I0803 23:48:28.783306  346092 main.go:141] libmachine: (ha-349588) DBG | I0803 23:48:28.783240  346115 retry.go:31] will retry after 4.340405118s: waiting for machine to come up
	I0803 23:48:33.126682  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:33.127152  346092 main.go:141] libmachine: (ha-349588) Found IP for machine: 192.168.39.168
	I0803 23:48:33.127176  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has current primary IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:33.127210  346092 main.go:141] libmachine: (ha-349588) Reserving static IP address...
	I0803 23:48:33.127440  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find host DHCP lease matching {name: "ha-349588", mac: "52:54:00:d9:f9:50", ip: "192.168.39.168"} in network mk-ha-349588
	I0803 23:48:33.206830  346092 main.go:141] libmachine: (ha-349588) DBG | Getting to WaitForSSH function...
	I0803 23:48:33.206864  346092 main.go:141] libmachine: (ha-349588) Reserved static IP address: 192.168.39.168
	I0803 23:48:33.206877  346092 main.go:141] libmachine: (ha-349588) Waiting for SSH to be available...
	I0803 23:48:33.209538  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:33.209926  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588
	I0803 23:48:33.209953  346092 main.go:141] libmachine: (ha-349588) DBG | unable to find defined IP address of network mk-ha-349588 interface with MAC address 52:54:00:d9:f9:50
	I0803 23:48:33.210115  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH client type: external
	I0803 23:48:33.210168  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa (-rw-------)
	I0803 23:48:33.210224  346092 main.go:141] libmachine: (ha-349588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:48:33.210244  346092 main.go:141] libmachine: (ha-349588) DBG | About to run SSH command:
	I0803 23:48:33.210260  346092 main.go:141] libmachine: (ha-349588) DBG | exit 0
	I0803 23:48:33.214010  346092 main.go:141] libmachine: (ha-349588) DBG | SSH cmd err, output: exit status 255: 
	I0803 23:48:33.214077  346092 main.go:141] libmachine: (ha-349588) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0803 23:48:33.214103  346092 main.go:141] libmachine: (ha-349588) DBG | command : exit 0
	I0803 23:48:33.214117  346092 main.go:141] libmachine: (ha-349588) DBG | err     : exit status 255
	I0803 23:48:33.214129  346092 main.go:141] libmachine: (ha-349588) DBG | output  : 
	I0803 23:48:36.215923  346092 main.go:141] libmachine: (ha-349588) DBG | Getting to WaitForSSH function...
	I0803 23:48:36.218572  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.218985  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.219013  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.219082  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH client type: external
	I0803 23:48:36.219103  346092 main.go:141] libmachine: (ha-349588) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa (-rw-------)
	I0803 23:48:36.219141  346092 main.go:141] libmachine: (ha-349588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:48:36.219157  346092 main.go:141] libmachine: (ha-349588) DBG | About to run SSH command:
	I0803 23:48:36.219170  346092 main.go:141] libmachine: (ha-349588) DBG | exit 0
	I0803 23:48:36.337862  346092 main.go:141] libmachine: (ha-349588) DBG | SSH cmd err, output: <nil>: 
	I0803 23:48:36.338165  346092 main.go:141] libmachine: (ha-349588) KVM machine creation complete!
	I0803 23:48:36.338513  346092 main.go:141] libmachine: (ha-349588) Calling .GetConfigRaw
	I0803 23:48:36.339091  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:36.339286  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:36.339413  346092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:48:36.339424  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:48:36.340646  346092 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:48:36.340662  346092 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:48:36.340669  346092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:48:36.340676  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.342849  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.343214  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.343243  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.343346  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.343540  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.343678  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.343794  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.343956  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.344188  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.344202  346092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:48:36.441183  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:48:36.441208  346092 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:48:36.441216  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.443990  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.444394  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.444424  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.444612  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.444811  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.444973  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.445104  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.445241  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.445426  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.445437  346092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:48:36.542286  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:48:36.542365  346092 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:48:36.542371  346092 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:48:36.542384  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:36.542688  346092 buildroot.go:166] provisioning hostname "ha-349588"
	I0803 23:48:36.542739  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:36.542966  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.545919  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.546337  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.546368  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.546552  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.546755  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.546913  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.547066  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.547213  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.547411  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.547426  346092 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588 && echo "ha-349588" | sudo tee /etc/hostname
	I0803 23:48:36.660783  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588
	
	I0803 23:48:36.660811  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.663757  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.664197  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.664222  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.664426  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.664653  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.664851  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.664993  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.665167  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:36.665347  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:36.665362  346092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:48:36.771045  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:48:36.771076  346092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:48:36.771125  346092 buildroot.go:174] setting up certificates
	I0803 23:48:36.771143  346092 provision.go:84] configureAuth start
	I0803 23:48:36.771157  346092 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:48:36.771474  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:36.774122  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.774504  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.774536  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.774645  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.776986  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.777284  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.777333  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.777497  346092 provision.go:143] copyHostCerts
	I0803 23:48:36.777544  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:48:36.777581  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:48:36.777591  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:48:36.777659  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:48:36.777742  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:48:36.777760  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:48:36.777766  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:48:36.777790  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:48:36.777832  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:48:36.777848  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:48:36.777854  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:48:36.777874  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:48:36.777921  346092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588 san=[127.0.0.1 192.168.39.168 ha-349588 localhost minikube]
	I0803 23:48:36.891183  346092 provision.go:177] copyRemoteCerts
	I0803 23:48:36.891251  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:48:36.891279  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:36.894188  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.894510  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:36.894544  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:36.894727  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:36.894957  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:36.895157  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:36.895313  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:36.978456  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:48:36.978533  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:48:37.004096  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:48:37.004172  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0803 23:48:37.028761  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:48:37.028864  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:48:37.053090  346092 provision.go:87] duration metric: took 281.906542ms to configureAuth
	I0803 23:48:37.053131  346092 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:48:37.053320  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:48:37.053406  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.056081  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.056541  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.056567  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.056725  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.056959  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.057168  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.057334  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.057499  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:37.057703  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:37.057719  346092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:48:37.329708  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:48:37.329744  346092 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:48:37.329755  346092 main.go:141] libmachine: (ha-349588) Calling .GetURL
	I0803 23:48:37.331027  346092 main.go:141] libmachine: (ha-349588) DBG | Using libvirt version 6000000
	I0803 23:48:37.333248  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.333780  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.333808  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.333977  346092 main.go:141] libmachine: Docker is up and running!
	I0803 23:48:37.333999  346092 main.go:141] libmachine: Reticulating splines...
	I0803 23:48:37.334010  346092 client.go:171] duration metric: took 27.821945455s to LocalClient.Create
	I0803 23:48:37.334044  346092 start.go:167] duration metric: took 27.822025189s to libmachine.API.Create "ha-349588"
	I0803 23:48:37.334056  346092 start.go:293] postStartSetup for "ha-349588" (driver="kvm2")
	I0803 23:48:37.334065  346092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:48:37.334081  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.334393  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:48:37.334417  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.336642  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.336927  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.336953  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.337119  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.337290  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.337446  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.337616  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:37.416116  346092 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:48:37.420420  346092 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:48:37.420451  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:48:37.420522  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:48:37.420630  346092 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:48:37.420645  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:48:37.420778  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:48:37.430694  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:48:37.459257  346092 start.go:296] duration metric: took 125.186102ms for postStartSetup
	I0803 23:48:37.459317  346092 main.go:141] libmachine: (ha-349588) Calling .GetConfigRaw
	I0803 23:48:37.459978  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:37.463817  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.464170  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.464194  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.464482  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:48:37.464696  346092 start.go:128] duration metric: took 27.971861416s to createHost
	I0803 23:48:37.464731  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.466929  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.467283  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.467311  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.467442  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.467641  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.467814  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.467939  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.468075  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:48:37.468271  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:48:37.468281  346092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:48:37.566205  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722728917.546761844
	
	I0803 23:48:37.566231  346092 fix.go:216] guest clock: 1722728917.546761844
	I0803 23:48:37.566238  346092 fix.go:229] Guest: 2024-08-03 23:48:37.546761844 +0000 UTC Remote: 2024-08-03 23:48:37.464710805 +0000 UTC m=+28.082129480 (delta=82.051039ms)
	I0803 23:48:37.566259  346092 fix.go:200] guest clock delta is within tolerance: 82.051039ms
	I0803 23:48:37.566264  346092 start.go:83] releasing machines lock for "ha-349588", held for 28.07350849s
	I0803 23:48:37.566282  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.566552  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:37.569332  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.569715  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.569744  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.569912  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.570398  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.570564  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:48:37.570663  346092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:48:37.570703  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.570820  346092 ssh_runner.go:195] Run: cat /version.json
	I0803 23:48:37.570846  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:48:37.573409  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.573687  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.573810  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.573834  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.573988  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.574087  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:37.574132  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:37.574275  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:48:37.574307  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.574464  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.574495  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:48:37.574606  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:48:37.574618  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:37.574770  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:48:37.668834  346092 ssh_runner.go:195] Run: systemctl --version
	I0803 23:48:37.675150  346092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:48:37.834421  346092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:48:37.840819  346092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:48:37.840914  346092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:48:37.857582  346092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:48:37.857616  346092 start.go:495] detecting cgroup driver to use...
	I0803 23:48:37.857725  346092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:48:37.875286  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:48:37.891205  346092 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:48:37.891287  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:48:37.906903  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:48:37.922814  346092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:48:38.040844  346092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:48:38.187910  346092 docker.go:233] disabling docker service ...
	I0803 23:48:38.187983  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:48:38.203041  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:48:38.216953  346092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:48:38.356540  346092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:48:38.471367  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:48:38.485586  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:48:38.504903  346092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:48:38.504998  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.515915  346092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:48:38.515993  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.527084  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.538078  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.549280  346092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:48:38.560636  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.571734  346092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.589785  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:48:38.600819  346092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:48:38.610949  346092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:48:38.611026  346092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:48:38.624934  346092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:48:38.635005  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:48:38.748298  346092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:48:38.887792  346092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:48:38.887892  346092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:48:38.892998  346092 start.go:563] Will wait 60s for crictl version
	I0803 23:48:38.893081  346092 ssh_runner.go:195] Run: which crictl
	I0803 23:48:38.897088  346092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:48:38.935449  346092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:48:38.935539  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:48:38.965015  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:48:38.995381  346092 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:48:38.996902  346092 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:48:38.999775  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:39.000151  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:48:39.000175  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:48:39.000430  346092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:48:39.004744  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:48:39.019060  346092 kubeadm.go:883] updating cluster {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:48:39.019244  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:48:39.019542  346092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:48:39.058144  346092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0803 23:48:39.058231  346092 ssh_runner.go:195] Run: which lz4
	I0803 23:48:39.062491  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0803 23:48:39.062602  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0803 23:48:39.066958  346092 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0803 23:48:39.067007  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0803 23:48:40.528763  346092 crio.go:462] duration metric: took 1.466185715s to copy over tarball
	I0803 23:48:40.528870  346092 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0803 23:48:42.691815  346092 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.162903819s)
	I0803 23:48:42.691850  346092 crio.go:469] duration metric: took 2.163033059s to extract the tarball
	I0803 23:48:42.691861  346092 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0803 23:48:42.730485  346092 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:48:42.778939  346092 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:48:42.778965  346092 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:48:42.778978  346092 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.30.3 crio true true} ...
	I0803 23:48:42.779117  346092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:48:42.779205  346092 ssh_runner.go:195] Run: crio config
	I0803 23:48:42.828670  346092 cni.go:84] Creating CNI manager for ""
	I0803 23:48:42.828702  346092 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:48:42.828719  346092 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:48:42.828744  346092 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-349588 NodeName:ha-349588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:48:42.828899  346092 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-349588"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:48:42.828929  346092 kube-vip.go:115] generating kube-vip config ...
	I0803 23:48:42.828978  346092 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:48:42.847620  346092 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:48:42.847740  346092 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:48:42.847794  346092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:48:42.858826  346092 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:48:42.858911  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:48:42.869873  346092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:48:42.888649  346092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:48:42.906568  346092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:48:42.924948  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0803 23:48:42.942182  346092 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:48:42.946393  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:48:42.959877  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:48:43.090573  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:48:43.109673  346092 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.168
	I0803 23:48:43.109707  346092 certs.go:194] generating shared ca certs ...
	I0803 23:48:43.109736  346092 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.109935  346092 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:48:43.109995  346092 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:48:43.110016  346092 certs.go:256] generating profile certs ...
	I0803 23:48:43.110095  346092 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:48:43.110115  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt with IP's: []
	I0803 23:48:43.176202  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt ...
	I0803 23:48:43.176243  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt: {Name:mk8846af52ab7192f012806995ca5756c43d9aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.176414  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key ...
	I0803 23:48:43.176426  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key: {Name:mk3c59388753fea20f89d92bf03bdfc970c14c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.176505  346092 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad
	I0803 23:48:43.176520  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.254]
	I0803 23:48:43.323353  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad ...
	I0803 23:48:43.323387  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad: {Name:mk8daf6ee6cbba709dc68563d6432752e9aeecab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.323547  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad ...
	I0803 23:48:43.323560  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad: {Name:mkbb0f47da156ebcc5042f70a6f380500f1cb64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.323633  346092 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.92d048ad -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:48:43.323725  346092 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.92d048ad -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
	I0803 23:48:43.323784  346092 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:48:43.323798  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt with IP's: []
	I0803 23:48:43.488751  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt ...
	I0803 23:48:43.488786  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt: {Name:mk0d9fb306df1ed4b7eeba1f21c32111bb96f6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.488947  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key ...
	I0803 23:48:43.488958  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key: {Name:mkb23eaf410419e953894e823db49217d4b5f172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:48:43.489031  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:48:43.489065  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:48:43.489088  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:48:43.489102  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:48:43.489116  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:48:43.489130  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:48:43.489142  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:48:43.489155  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:48:43.489236  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:48:43.489273  346092 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:48:43.489282  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:48:43.489304  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:48:43.489326  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:48:43.489346  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:48:43.489382  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:48:43.489420  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.489441  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.489453  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.490082  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:48:43.517725  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:48:43.543526  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:48:43.568742  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:48:43.594201  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 23:48:43.620048  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:48:43.644985  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:48:43.677146  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:48:43.703176  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:48:43.727682  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:48:43.752318  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:48:43.777458  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:48:43.794903  346092 ssh_runner.go:195] Run: openssl version
	I0803 23:48:43.801030  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:48:43.812834  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.817558  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.817619  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:48:43.823759  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:48:43.834807  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:48:43.845824  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.850583  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.850645  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:48:43.856639  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:48:43.868327  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:48:43.880525  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.885320  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.885395  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:48:43.891760  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
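
The three symlink commands above follow OpenSSL's hashed-directory convention: each CA is linked into /etc/ssl/certs under its subject hash plus a ".0" suffix, which is exactly the value the preceding 'openssl x509 -hash -noout' calls print. A minimal sketch of the same convention for the minikube CA (illustrative only, not part of this run):

    # Compute the subject hash OpenSSL uses to look up a CA, then link the cert under it.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run
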
	I0803 23:48:43.906642  346092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:48:43.916110  346092 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:48:43.916210  346092 kubeadm.go:392] StartCluster: {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:48:43.916326  346092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:48:43.916400  346092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:48:43.980609  346092 cri.go:89] found id: ""
	I0803 23:48:43.980701  346092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:48:43.996519  346092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 23:48:44.010446  346092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 23:48:44.024303  346092 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 23:48:44.024325  346092 kubeadm.go:157] found existing configuration files:
	
	I0803 23:48:44.024387  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 23:48:44.034689  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 23:48:44.034769  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 23:48:44.045767  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 23:48:44.056709  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 23:48:44.056781  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 23:48:44.067638  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 23:48:44.077701  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 23:48:44.077809  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 23:48:44.088461  346092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 23:48:44.098545  346092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 23:48:44.098609  346092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0803 23:48:44.109113  346092 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0803 23:48:44.215580  346092 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 23:48:44.215831  346092 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 23:48:44.347666  346092 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 23:48:44.347842  346092 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 23:48:44.348003  346092 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 23:48:44.559912  346092 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 23:48:44.623290  346092 out.go:204]   - Generating certificates and keys ...
	I0803 23:48:44.623399  346092 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 23:48:44.623498  346092 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 23:48:44.677872  346092 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 23:48:45.161412  346092 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 23:48:45.359698  346092 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 23:48:45.509093  346092 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 23:48:45.735838  346092 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 23:48:45.736073  346092 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-349588 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0803 23:48:45.868983  346092 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 23:48:45.869145  346092 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-349588 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I0803 23:48:45.939849  346092 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 23:48:46.488546  346092 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 23:48:46.609445  346092 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 23:48:46.609584  346092 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 23:48:46.913292  346092 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 23:48:47.075011  346092 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 23:48:47.280716  346092 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 23:48:47.381369  346092 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 23:48:47.425104  346092 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 23:48:47.425771  346092 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 23:48:47.430790  346092 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 23:48:47.432462  346092 out.go:204]   - Booting up control plane ...
	I0803 23:48:47.432572  346092 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 23:48:47.432670  346092 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 23:48:47.432761  346092 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 23:48:47.451505  346092 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 23:48:47.452518  346092 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 23:48:47.452610  346092 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 23:48:47.581671  346092 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 23:48:47.581778  346092 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 23:48:48.582643  346092 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001457349s
	I0803 23:48:48.582750  346092 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 23:48:54.290536  346092 kubeadm.go:310] [api-check] The API server is healthy after 5.710287165s
	I0803 23:48:54.303491  346092 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 23:48:54.323699  346092 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 23:48:54.354226  346092 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 23:48:54.354458  346092 kubeadm.go:310] [mark-control-plane] Marking the node ha-349588 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 23:48:54.369768  346092 kubeadm.go:310] [bootstrap-token] Using token: vmd729.4ijgfu3uo5k2v1gw
	I0803 23:48:54.371222  346092 out.go:204]   - Configuring RBAC rules ...
	I0803 23:48:54.371383  346092 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 23:48:54.384821  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 23:48:54.400057  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 23:48:54.403731  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 23:48:54.408140  346092 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 23:48:54.416162  346092 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 23:48:54.701480  346092 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 23:48:55.157420  346092 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 23:48:55.698866  346092 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 23:48:55.698893  346092 kubeadm.go:310] 
	I0803 23:48:55.699005  346092 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 23:48:55.699032  346092 kubeadm.go:310] 
	I0803 23:48:55.699132  346092 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 23:48:55.699142  346092 kubeadm.go:310] 
	I0803 23:48:55.699181  346092 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 23:48:55.699255  346092 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 23:48:55.699320  346092 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 23:48:55.699330  346092 kubeadm.go:310] 
	I0803 23:48:55.699410  346092 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 23:48:55.699418  346092 kubeadm.go:310] 
	I0803 23:48:55.699495  346092 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 23:48:55.699514  346092 kubeadm.go:310] 
	I0803 23:48:55.699557  346092 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 23:48:55.699622  346092 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 23:48:55.699684  346092 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 23:48:55.699690  346092 kubeadm.go:310] 
	I0803 23:48:55.699764  346092 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 23:48:55.699833  346092 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 23:48:55.699839  346092 kubeadm.go:310] 
	I0803 23:48:55.699954  346092 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vmd729.4ijgfu3uo5k2v1gw \
	I0803 23:48:55.700069  346092 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c \
	I0803 23:48:55.700090  346092 kubeadm.go:310] 	--control-plane 
	I0803 23:48:55.700094  346092 kubeadm.go:310] 
	I0803 23:48:55.700168  346092 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 23:48:55.700181  346092 kubeadm.go:310] 
	I0803 23:48:55.700261  346092 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vmd729.4ijgfu3uo5k2v1gw \
	I0803 23:48:55.700368  346092 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c 
	I0803 23:48:55.701070  346092 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
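
The only warning kubeadm raises here is the disabled kubelet unit; the remedy is the one the message itself names, sketched below for reference:

    # Enable the kubelet service so it starts on boot, as the kubeadm warning suggests.
    sudo systemctl enable kubelet.service
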
	I0803 23:48:55.701096  346092 cni.go:84] Creating CNI manager for ""
	I0803 23:48:55.701103  346092 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0803 23:48:55.702841  346092 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0803 23:48:55.704052  346092 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0803 23:48:55.709875  346092 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0803 23:48:55.709897  346092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0803 23:48:55.735301  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0803 23:48:56.119562  346092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 23:48:56.119694  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-349588 minikube.k8s.io/updated_at=2024_08_03T23_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=ha-349588 minikube.k8s.io/primary=true
	I0803 23:48:56.119699  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:56.147094  346092 ops.go:34] apiserver oom_adj: -16
	I0803 23:48:56.339836  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:56.840689  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:57.339892  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:57.840550  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:58.340862  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:58.840451  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:59.339991  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:48:59.840655  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:00.340322  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:00.840255  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:01.340913  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:01.840915  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:02.340929  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:02.840491  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:03.339993  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:03.840488  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:04.340852  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:04.840324  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:05.339902  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:05.840734  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:06.339995  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:06.840443  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:07.340744  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:07.840499  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 23:49:07.945567  346092 kubeadm.go:1113] duration metric: took 11.825936372s to wait for elevateKubeSystemPrivileges
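
The block of repeated 'kubectl get sa default' calls above is a poll: minikube re-runs the command roughly every half second until the default service account exists, and the elevateKubeSystemPrivileges duration covers that wait. A minimal equivalent of the wait, assuming kubectl is already pointed at the new cluster:

    # Poll until the "default" ServiceAccount exists in the target namespace.
    until kubectl get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
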
	I0803 23:49:07.945620  346092 kubeadm.go:394] duration metric: took 24.029418072s to StartCluster
	I0803 23:49:07.945649  346092 settings.go:142] acquiring lock: {Name:mk918fd72253bf33e8bae308fd36ed8b1c353763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:07.945731  346092 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:49:07.946576  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/kubeconfig: {Name:mkd789cdd11c6330d283dbc76129ed198eb15398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:07.946821  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 23:49:07.946833  346092 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:49:07.946857  346092 start.go:241] waiting for startup goroutines ...
	I0803 23:49:07.946870  346092 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 23:49:07.946934  346092 addons.go:69] Setting storage-provisioner=true in profile "ha-349588"
	I0803 23:49:07.946952  346092 addons.go:69] Setting default-storageclass=true in profile "ha-349588"
	I0803 23:49:07.946967  346092 addons.go:234] Setting addon storage-provisioner=true in "ha-349588"
	I0803 23:49:07.946981  346092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-349588"
	I0803 23:49:07.946996  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:07.947103  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:07.947486  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.947486  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.947519  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.947532  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.963361  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36977
	I0803 23:49:07.963699  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0803 23:49:07.963950  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.964258  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.964486  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.964506  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.964822  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.964843  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.964892  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.965113  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:07.965322  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.965886  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.965916  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.967448  346092 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:49:07.967821  346092 kapi.go:59] client config for ha-349588: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key", CAFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0803 23:49:07.968419  346092 cert_rotation.go:137] Starting client certificate rotation controller
	I0803 23:49:07.968706  346092 addons.go:234] Setting addon default-storageclass=true in "ha-349588"
	I0803 23:49:07.968758  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:07.969038  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.969082  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.981268  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I0803 23:49:07.981760  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.982406  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.982449  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.982806  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.983026  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:07.984863  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:07.984975  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I0803 23:49:07.985529  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:07.986001  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:07.986021  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:07.986315  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:07.986782  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:07.986817  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:07.986999  346092 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:49:07.988373  346092 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:49:07.988390  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:49:07.988406  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:07.991634  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:07.992145  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:07.992175  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:07.992401  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:07.992624  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:07.992802  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:07.992977  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:08.004028  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0803 23:49:08.004521  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:08.004986  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:08.005008  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:08.005356  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:08.005562  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:08.007175  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:08.007434  346092 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:49:08.007453  346092 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:49:08.007479  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:08.010032  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:08.010474  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:08.010504  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:08.010667  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:08.010853  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:08.011029  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:08.011165  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:08.072504  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 23:49:08.190173  346092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:49:08.206987  346092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:49:08.632401  346092 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
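
The sed pipeline a few lines above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1 in this run). One way to confirm the injected hosts block afterwards, assuming kubectl is pointed at this cluster:

    # Print the Corefile; it should now contain a hosts block like:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
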
	I0803 23:49:09.027910  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.027947  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.027987  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.028010  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.028254  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028269  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028278  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.028285  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.028315  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028333  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028343  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.028350  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.028534  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028546  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028690  346092 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0803 23:49:09.028698  346092 round_trippers.go:469] Request Headers:
	I0803 23:49:09.028710  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:49:09.028714  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:49:09.028850  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.028876  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.028916  346092 main.go:141] libmachine: (ha-349588) DBG | Closing plugin on server side
	I0803 23:49:09.058640  346092 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0803 23:49:09.059232  346092 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0803 23:49:09.059248  346092 round_trippers.go:469] Request Headers:
	I0803 23:49:09.059257  346092 round_trippers.go:473]     Content-Type: application/json
	I0803 23:49:09.059262  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:49:09.059267  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:49:09.065033  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:49:09.065333  346092 main.go:141] libmachine: Making call to close driver server
	I0803 23:49:09.065358  346092 main.go:141] libmachine: (ha-349588) Calling .Close
	I0803 23:49:09.065669  346092 main.go:141] libmachine: Successfully made call to close driver server
	I0803 23:49:09.065690  346092 main.go:141] libmachine: Making call to close connection to plugin binary
	I0803 23:49:09.067539  346092 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0803 23:49:09.068807  346092 addons.go:510] duration metric: took 1.121931039s for enable addons: enabled=[storage-provisioner default-storageclass]
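
With storage-provisioner and default-storageclass enabled, the addon state can be double-checked from the host; a sketch, assuming the same profile name (the test itself invokes the binary as out/minikube-linux-amd64):

    # List addon status for this profile; both enabled addons should be reported.
    minikube addons list -p ha-349588
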
	I0803 23:49:09.068867  346092 start.go:246] waiting for cluster config update ...
	I0803 23:49:09.068886  346092 start.go:255] writing updated cluster config ...
	I0803 23:49:09.070612  346092 out.go:177] 
	I0803 23:49:09.072304  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:09.072402  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:49:09.074300  346092 out.go:177] * Starting "ha-349588-m02" control-plane node in "ha-349588" cluster
	I0803 23:49:09.075715  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:49:09.075749  346092 cache.go:56] Caching tarball of preloaded images
	I0803 23:49:09.075867  346092 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:49:09.075882  346092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:49:09.075999  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:49:09.076295  346092 start.go:360] acquireMachinesLock for ha-349588-m02: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:49:09.076362  346092 start.go:364] duration metric: took 35.831µs to acquireMachinesLock for "ha-349588-m02"
	I0803 23:49:09.076384  346092 start.go:93] Provisioning new machine with config: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:49:09.076493  346092 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0803 23:49:09.079072  346092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:49:09.079194  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:09.079228  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:09.095204  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0803 23:49:09.095749  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:09.096322  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:09.096359  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:09.096849  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:09.097061  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:09.097232  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:09.097411  346092 start.go:159] libmachine.API.Create for "ha-349588" (driver="kvm2")
	I0803 23:49:09.097439  346092 client.go:168] LocalClient.Create starting
	I0803 23:49:09.097476  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0803 23:49:09.097545  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:49:09.097588  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:49:09.097649  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0803 23:49:09.097670  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:49:09.097681  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:49:09.097699  346092 main.go:141] libmachine: Running pre-create checks...
	I0803 23:49:09.097715  346092 main.go:141] libmachine: (ha-349588-m02) Calling .PreCreateCheck
	I0803 23:49:09.097887  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetConfigRaw
	I0803 23:49:09.098333  346092 main.go:141] libmachine: Creating machine...
	I0803 23:49:09.098349  346092 main.go:141] libmachine: (ha-349588-m02) Calling .Create
	I0803 23:49:09.098480  346092 main.go:141] libmachine: (ha-349588-m02) Creating KVM machine...
	I0803 23:49:09.099793  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found existing default KVM network
	I0803 23:49:09.099966  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found existing private KVM network mk-ha-349588
	I0803 23:49:09.100121  346092 main.go:141] libmachine: (ha-349588-m02) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02 ...
	I0803 23:49:09.100150  346092 main.go:141] libmachine: (ha-349588-m02) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:49:09.100241  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.100120  346511 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:49:09.100390  346092 main.go:141] libmachine: (ha-349588-m02) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:49:09.392884  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.392736  346511 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa...
	I0803 23:49:09.506019  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.505833  346511 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/ha-349588-m02.rawdisk...
	I0803 23:49:09.506078  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Writing magic tar header
	I0803 23:49:09.506097  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Writing SSH key tar header
	I0803 23:49:09.506110  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:09.505972  346511 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02 ...
	I0803 23:49:09.506140  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02
	I0803 23:49:09.506180  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0803 23:49:09.506197  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02 (perms=drwx------)
	I0803 23:49:09.506214  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:49:09.506255  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:49:09.506271  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0803 23:49:09.506284  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0803 23:49:09.506299  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:49:09.506311  346092 main.go:141] libmachine: (ha-349588-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:49:09.506322  346092 main.go:141] libmachine: (ha-349588-m02) Creating domain...
	I0803 23:49:09.506337  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0803 23:49:09.506348  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:49:09.506358  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:49:09.506369  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Checking permissions on dir: /home
	I0803 23:49:09.506379  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Skipping /home - not owner
	I0803 23:49:09.507461  346092 main.go:141] libmachine: (ha-349588-m02) define libvirt domain using xml: 
	I0803 23:49:09.507488  346092 main.go:141] libmachine: (ha-349588-m02) <domain type='kvm'>
	I0803 23:49:09.507498  346092 main.go:141] libmachine: (ha-349588-m02)   <name>ha-349588-m02</name>
	I0803 23:49:09.507513  346092 main.go:141] libmachine: (ha-349588-m02)   <memory unit='MiB'>2200</memory>
	I0803 23:49:09.507523  346092 main.go:141] libmachine: (ha-349588-m02)   <vcpu>2</vcpu>
	I0803 23:49:09.507530  346092 main.go:141] libmachine: (ha-349588-m02)   <features>
	I0803 23:49:09.507539  346092 main.go:141] libmachine: (ha-349588-m02)     <acpi/>
	I0803 23:49:09.507547  346092 main.go:141] libmachine: (ha-349588-m02)     <apic/>
	I0803 23:49:09.507555  346092 main.go:141] libmachine: (ha-349588-m02)     <pae/>
	I0803 23:49:09.507564  346092 main.go:141] libmachine: (ha-349588-m02)     
	I0803 23:49:09.507572  346092 main.go:141] libmachine: (ha-349588-m02)   </features>
	I0803 23:49:09.507586  346092 main.go:141] libmachine: (ha-349588-m02)   <cpu mode='host-passthrough'>
	I0803 23:49:09.507596  346092 main.go:141] libmachine: (ha-349588-m02)   
	I0803 23:49:09.507604  346092 main.go:141] libmachine: (ha-349588-m02)   </cpu>
	I0803 23:49:09.507615  346092 main.go:141] libmachine: (ha-349588-m02)   <os>
	I0803 23:49:09.507625  346092 main.go:141] libmachine: (ha-349588-m02)     <type>hvm</type>
	I0803 23:49:09.507633  346092 main.go:141] libmachine: (ha-349588-m02)     <boot dev='cdrom'/>
	I0803 23:49:09.507643  346092 main.go:141] libmachine: (ha-349588-m02)     <boot dev='hd'/>
	I0803 23:49:09.507655  346092 main.go:141] libmachine: (ha-349588-m02)     <bootmenu enable='no'/>
	I0803 23:49:09.507688  346092 main.go:141] libmachine: (ha-349588-m02)   </os>
	I0803 23:49:09.507701  346092 main.go:141] libmachine: (ha-349588-m02)   <devices>
	I0803 23:49:09.507709  346092 main.go:141] libmachine: (ha-349588-m02)     <disk type='file' device='cdrom'>
	I0803 23:49:09.507752  346092 main.go:141] libmachine: (ha-349588-m02)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/boot2docker.iso'/>
	I0803 23:49:09.507779  346092 main.go:141] libmachine: (ha-349588-m02)       <target dev='hdc' bus='scsi'/>
	I0803 23:49:09.507793  346092 main.go:141] libmachine: (ha-349588-m02)       <readonly/>
	I0803 23:49:09.507804  346092 main.go:141] libmachine: (ha-349588-m02)     </disk>
	I0803 23:49:09.507816  346092 main.go:141] libmachine: (ha-349588-m02)     <disk type='file' device='disk'>
	I0803 23:49:09.507829  346092 main.go:141] libmachine: (ha-349588-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:49:09.507845  346092 main.go:141] libmachine: (ha-349588-m02)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/ha-349588-m02.rawdisk'/>
	I0803 23:49:09.507856  346092 main.go:141] libmachine: (ha-349588-m02)       <target dev='hda' bus='virtio'/>
	I0803 23:49:09.507867  346092 main.go:141] libmachine: (ha-349588-m02)     </disk>
	I0803 23:49:09.507877  346092 main.go:141] libmachine: (ha-349588-m02)     <interface type='network'>
	I0803 23:49:09.507891  346092 main.go:141] libmachine: (ha-349588-m02)       <source network='mk-ha-349588'/>
	I0803 23:49:09.507916  346092 main.go:141] libmachine: (ha-349588-m02)       <model type='virtio'/>
	I0803 23:49:09.507946  346092 main.go:141] libmachine: (ha-349588-m02)     </interface>
	I0803 23:49:09.507967  346092 main.go:141] libmachine: (ha-349588-m02)     <interface type='network'>
	I0803 23:49:09.507982  346092 main.go:141] libmachine: (ha-349588-m02)       <source network='default'/>
	I0803 23:49:09.507992  346092 main.go:141] libmachine: (ha-349588-m02)       <model type='virtio'/>
	I0803 23:49:09.508003  346092 main.go:141] libmachine: (ha-349588-m02)     </interface>
	I0803 23:49:09.508013  346092 main.go:141] libmachine: (ha-349588-m02)     <serial type='pty'>
	I0803 23:49:09.508026  346092 main.go:141] libmachine: (ha-349588-m02)       <target port='0'/>
	I0803 23:49:09.508041  346092 main.go:141] libmachine: (ha-349588-m02)     </serial>
	I0803 23:49:09.508068  346092 main.go:141] libmachine: (ha-349588-m02)     <console type='pty'>
	I0803 23:49:09.508087  346092 main.go:141] libmachine: (ha-349588-m02)       <target type='serial' port='0'/>
	I0803 23:49:09.508100  346092 main.go:141] libmachine: (ha-349588-m02)     </console>
	I0803 23:49:09.508108  346092 main.go:141] libmachine: (ha-349588-m02)     <rng model='virtio'>
	I0803 23:49:09.508122  346092 main.go:141] libmachine: (ha-349588-m02)       <backend model='random'>/dev/random</backend>
	I0803 23:49:09.508132  346092 main.go:141] libmachine: (ha-349588-m02)     </rng>
	I0803 23:49:09.508140  346092 main.go:141] libmachine: (ha-349588-m02)     
	I0803 23:49:09.508149  346092 main.go:141] libmachine: (ha-349588-m02)     
	I0803 23:49:09.508162  346092 main.go:141] libmachine: (ha-349588-m02)   </devices>
	I0803 23:49:09.508174  346092 main.go:141] libmachine: (ha-349588-m02) </domain>
	I0803 23:49:09.508186  346092 main.go:141] libmachine: (ha-349588-m02) 
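	(The DBG lines above are the libvirt domain XML the kvm2 driver defines for the new node: name, memory, vCPUs, boot order, the boot2docker ISO, the raw disk, and two virtio network interfaces. As a rough, self-contained sketch of the same idea -- not minikube's actual code; the template fields, file paths and the use of virsh are assumptions for illustration -- a domain definition can be rendered from a Go template and handed to libvirt like this:

	// Illustrative sketch only: render a minimal libvirt domain XML similar to the
	// one logged above and define it with virsh. renderDomainXML-style names and
	// all paths here are hypothetical, not part of minikube.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"text/template"
	)

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.VCPUs}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	type domainConfig struct {
		Name      string
		MemoryMiB int
		VCPUs     int
		DiskPath  string
		Network   string
	}

	func main() {
		cfg := domainConfig{Name: "example-node", MemoryMiB: 2200, VCPUs: 2,
			DiskPath: "/tmp/example-node.rawdisk", Network: "default"}

		f, err := os.Create("/tmp/example-node.xml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Fill the template with the node's settings, mirroring the structure of
		// the XML printed in the log above.
		if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(f, cfg); err != nil {
			panic(err)
		}

		// Hand the rendered XML to libvirt; requires virsh and a running libvirtd.
		out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("define failed:", err)
		}
	}
	)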
	I0803 23:49:09.515269  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:7a:63:bd in network default
	I0803 23:49:09.515943  346092 main.go:141] libmachine: (ha-349588-m02) Ensuring networks are active...
	I0803 23:49:09.515967  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:09.516868  346092 main.go:141] libmachine: (ha-349588-m02) Ensuring network default is active
	I0803 23:49:09.517308  346092 main.go:141] libmachine: (ha-349588-m02) Ensuring network mk-ha-349588 is active
	I0803 23:49:09.517724  346092 main.go:141] libmachine: (ha-349588-m02) Getting domain xml...
	I0803 23:49:09.518644  346092 main.go:141] libmachine: (ha-349588-m02) Creating domain...
	I0803 23:49:10.755347  346092 main.go:141] libmachine: (ha-349588-m02) Waiting to get IP...
	I0803 23:49:10.756208  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:10.756617  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:10.756647  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:10.756592  346511 retry.go:31] will retry after 196.457708ms: waiting for machine to come up
	I0803 23:49:10.955123  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:10.955579  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:10.955605  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:10.955538  346511 retry.go:31] will retry after 314.513004ms: waiting for machine to come up
	I0803 23:49:11.272300  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:11.272803  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:11.272840  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:11.272737  346511 retry.go:31] will retry after 311.291518ms: waiting for machine to come up
	I0803 23:49:11.585254  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:11.585799  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:11.585830  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:11.585723  346511 retry.go:31] will retry after 523.229806ms: waiting for machine to come up
	I0803 23:49:12.110649  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:12.111090  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:12.111117  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:12.111031  346511 retry.go:31] will retry after 594.349932ms: waiting for machine to come up
	I0803 23:49:12.706604  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:12.707015  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:12.707046  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:12.706950  346511 retry.go:31] will retry after 579.421708ms: waiting for machine to come up
	I0803 23:49:13.287722  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:13.288146  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:13.288173  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:13.288098  346511 retry.go:31] will retry after 832.78526ms: waiting for machine to come up
	I0803 23:49:14.122636  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:14.123072  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:14.123097  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:14.123032  346511 retry.go:31] will retry after 1.40942689s: waiting for machine to come up
	I0803 23:49:15.534952  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:15.535443  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:15.535483  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:15.535391  346511 retry.go:31] will retry after 1.773682348s: waiting for machine to come up
	I0803 23:49:17.310303  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:17.310693  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:17.310720  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:17.310649  346511 retry.go:31] will retry after 2.230324158s: waiting for machine to come up
	I0803 23:49:19.542820  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:19.543326  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:19.543357  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:19.543274  346511 retry.go:31] will retry after 2.161656606s: waiting for machine to come up
	I0803 23:49:21.706940  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:21.707447  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:21.707472  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:21.707412  346511 retry.go:31] will retry after 2.578584432s: waiting for machine to come up
	I0803 23:49:24.287397  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:24.287819  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:24.287849  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:24.287767  346511 retry.go:31] will retry after 3.341759682s: waiting for machine to come up
	I0803 23:49:27.633275  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:27.633768  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find current IP address of domain ha-349588-m02 in network mk-ha-349588
	I0803 23:49:27.634003  346092 main.go:141] libmachine: (ha-349588-m02) DBG | I0803 23:49:27.633715  346511 retry.go:31] will retry after 4.956950166s: waiting for machine to come up
	I0803 23:49:32.592015  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.592451  346092 main.go:141] libmachine: (ha-349588-m02) Found IP for machine: 192.168.39.67
	I0803 23:49:32.592471  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has current primary IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
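	(The repeated "will retry after ...: waiting for machine to come up" lines above are a poll-with-growing-delay loop: the driver keeps asking the libvirt network's DHCP leases for the new MAC until an address appears. A minimal sketch of that pattern follows; the lookupIP helper and the backoff constants are assumptions for the example, not minikube's retry package.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the DHCP leases of the libvirt network;
	// it is a placeholder that always fails here, so main times out.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// Add jitter and grow the delay, roughly matching the increasing
			// intervals (hundreds of ms up to several seconds) seen in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:c5:a2:30", 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}
	)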
	I0803 23:49:32.592477  346092 main.go:141] libmachine: (ha-349588-m02) Reserving static IP address...
	I0803 23:49:32.592850  346092 main.go:141] libmachine: (ha-349588-m02) DBG | unable to find host DHCP lease matching {name: "ha-349588-m02", mac: "52:54:00:c5:a2:30", ip: "192.168.39.67"} in network mk-ha-349588
	I0803 23:49:32.671097  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Getting to WaitForSSH function...
	I0803 23:49:32.671127  346092 main.go:141] libmachine: (ha-349588-m02) Reserved static IP address: 192.168.39.67
	I0803 23:49:32.671140  346092 main.go:141] libmachine: (ha-349588-m02) Waiting for SSH to be available...
	I0803 23:49:32.674109  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.674564  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:32.674591  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.674755  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Using SSH client type: external
	I0803 23:49:32.674775  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa (-rw-------)
	I0803 23:49:32.674832  346092 main.go:141] libmachine: (ha-349588-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:49:32.674867  346092 main.go:141] libmachine: (ha-349588-m02) DBG | About to run SSH command:
	I0803 23:49:32.674884  346092 main.go:141] libmachine: (ha-349588-m02) DBG | exit 0
	I0803 23:49:32.797788  346092 main.go:141] libmachine: (ha-349588-m02) DBG | SSH cmd err, output: <nil>: 
	I0803 23:49:32.798082  346092 main.go:141] libmachine: (ha-349588-m02) KVM machine creation complete!
	I0803 23:49:32.798388  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetConfigRaw
	I0803 23:49:32.798971  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:32.799188  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:32.799417  346092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:49:32.799435  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0803 23:49:32.800912  346092 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:49:32.800962  346092 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:49:32.800974  346092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:49:32.800983  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:32.803140  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.803577  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:32.803607  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.803746  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:32.803953  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.804142  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.804329  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:32.804496  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:32.804723  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:32.804734  346092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:49:32.905099  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
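	(The `exit 0` probe above is how libmachine decides the guest's SSH service is reachable: open a session with the generated key and run a no-op command, treating a zero exit status as "up". A small sketch of the same check using golang.org/x/crypto/ssh -- not libmachine's native client; the address, user and key path are placeholders:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func sshAlive(addr, user, keyPath string) error {
		pemBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(pemBytes)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()

		// A zero exit status means the SSH service inside the guest is ready.
		return session.Run("exit 0")
	}

	func main() {
		if err := sshAlive("192.168.39.67:22", "docker", "/path/to/id_rsa"); err != nil {
			fmt.Println("ssh not ready:", err)
			return
		}
		fmt.Println("ssh ready")
	}
	)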
	I0803 23:49:32.905127  346092 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:49:32.905135  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:32.908228  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.908609  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:32.908632  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:32.908801  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:32.909041  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.909218  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:32.909340  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:32.909571  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:32.909828  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:32.909842  346092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:49:33.015154  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:49:33.015233  346092 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:49:33.015243  346092 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:49:33.015251  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:33.015518  346092 buildroot.go:166] provisioning hostname "ha-349588-m02"
	I0803 23:49:33.015547  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:33.015814  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.018501  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.018958  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.018992  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.019139  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.019322  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.019503  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.019644  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.019793  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:33.019965  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:33.019979  346092 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588-m02 && echo "ha-349588-m02" | sudo tee /etc/hostname
	I0803 23:49:33.137434  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588-m02
	
	I0803 23:49:33.137463  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.140293  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.140675  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.140702  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.140906  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.141108  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.141288  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.141456  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.141647  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:33.141861  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:33.141889  346092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:49:33.255392  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:49:33.255436  346092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:49:33.255459  346092 buildroot.go:174] setting up certificates
	I0803 23:49:33.255472  346092 provision.go:84] configureAuth start
	I0803 23:49:33.255487  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetMachineName
	I0803 23:49:33.255787  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:33.258331  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.258649  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.258678  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.258872  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.260786  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.261093  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.261133  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.261245  346092 provision.go:143] copyHostCerts
	I0803 23:49:33.261291  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:49:33.261335  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:49:33.261348  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:49:33.261441  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:49:33.261586  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:49:33.261610  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:49:33.261617  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:49:33.261649  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:49:33.261693  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:49:33.261709  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:49:33.261715  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:49:33.261736  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:49:33.261796  346092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588-m02 san=[127.0.0.1 192.168.39.67 ha-349588-m02 localhost minikube]
	I0803 23:49:33.513319  346092 provision.go:177] copyRemoteCerts
	I0803 23:49:33.513401  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:49:33.513438  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.516462  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.516819  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.516856  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.517011  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.517238  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.517393  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.517618  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:33.600517  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:49:33.600605  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:49:33.630035  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:49:33.630119  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:49:33.659132  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:49:33.659199  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:49:33.693181  346092 provision.go:87] duration metric: took 437.692464ms to configureAuth
	I0803 23:49:33.693210  346092 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:49:33.693426  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:33.693563  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.696446  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.696934  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.696969  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.697212  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.697497  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.697727  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.698031  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.698216  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:33.698403  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:33.698424  346092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:49:33.959540  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:49:33.959581  346092 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:49:33.959593  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetURL
	I0803 23:49:33.960929  346092 main.go:141] libmachine: (ha-349588-m02) DBG | Using libvirt version 6000000
	I0803 23:49:33.963512  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.963899  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.963929  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.964106  346092 main.go:141] libmachine: Docker is up and running!
	I0803 23:49:33.964121  346092 main.go:141] libmachine: Reticulating splines...
	I0803 23:49:33.964130  346092 client.go:171] duration metric: took 24.866682664s to LocalClient.Create
	I0803 23:49:33.964160  346092 start.go:167] duration metric: took 24.866749901s to libmachine.API.Create "ha-349588"
	I0803 23:49:33.964172  346092 start.go:293] postStartSetup for "ha-349588-m02" (driver="kvm2")
	I0803 23:49:33.964187  346092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:49:33.964221  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:33.964513  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:49:33.964545  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:33.966907  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.967233  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:33.967264  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:33.967403  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:33.967604  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:33.967771  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:33.967915  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:34.048442  346092 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:49:34.052707  346092 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:49:34.052737  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:49:34.052818  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:49:34.052927  346092 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:49:34.052941  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:49:34.053050  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:49:34.063171  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:49:34.087600  346092 start.go:296] duration metric: took 123.413254ms for postStartSetup
	I0803 23:49:34.087662  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetConfigRaw
	I0803 23:49:34.088269  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:34.091450  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.091855  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.091886  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.092198  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:49:34.092466  346092 start.go:128] duration metric: took 25.015961226s to createHost
	I0803 23:49:34.092491  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:34.094844  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.095181  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.095213  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.095338  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:34.095493  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.095609  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.095704  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:34.095817  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:49:34.096032  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0803 23:49:34.096044  346092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:49:34.198875  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722728974.174689570
	
	I0803 23:49:34.198914  346092 fix.go:216] guest clock: 1722728974.174689570
	I0803 23:49:34.198924  346092 fix.go:229] Guest: 2024-08-03 23:49:34.17468957 +0000 UTC Remote: 2024-08-03 23:49:34.092479911 +0000 UTC m=+84.709898585 (delta=82.209659ms)
	I0803 23:49:34.198942  346092 fix.go:200] guest clock delta is within tolerance: 82.209659ms
	I0803 23:49:34.198947  346092 start.go:83] releasing machines lock for "ha-349588-m02", held for 25.122576839s
	I0803 23:49:34.198968  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.199261  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:34.202135  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.202517  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.202542  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.204728  346092 out.go:177] * Found network options:
	I0803 23:49:34.206084  346092 out.go:177]   - NO_PROXY=192.168.39.168
	W0803 23:49:34.207413  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:49:34.207457  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.208191  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.208417  346092 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0803 23:49:34.208530  346092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:49:34.208577  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	W0803 23:49:34.208720  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:49:34.208822  346092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:49:34.208847  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0803 23:49:34.211895  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.211921  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.212318  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.212350  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:34.212374  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.212388  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:34.212549  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:34.212667  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0803 23:49:34.212745  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.212837  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0803 23:49:34.212868  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:34.212971  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0803 23:49:34.213014  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:34.213205  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0803 23:49:34.447882  346092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:49:34.454498  346092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:49:34.454573  346092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:49:34.471269  346092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:49:34.471295  346092 start.go:495] detecting cgroup driver to use...
	I0803 23:49:34.471359  346092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:49:34.488153  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:49:34.503703  346092 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:49:34.503780  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:49:34.518917  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:49:34.534021  346092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:49:34.650977  346092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:49:34.798998  346092 docker.go:233] disabling docker service ...
	I0803 23:49:34.799082  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:49:34.814209  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:49:34.828385  346092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:49:34.970942  346092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:49:35.098562  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:49:35.113143  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:49:35.132472  346092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:49:35.132547  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.143835  346092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:49:35.143943  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.155440  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.167348  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.178602  346092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:49:35.190200  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.201259  346092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.220070  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:49:35.231425  346092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:49:35.241477  346092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:49:35.241554  346092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:49:35.255830  346092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:49:35.265769  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:49:35.386227  346092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:49:35.520835  346092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:49:35.520906  346092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:49:35.526284  346092 start.go:563] Will wait 60s for crictl version
	I0803 23:49:35.526385  346092 ssh_runner.go:195] Run: which crictl
	I0803 23:49:35.530619  346092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:49:35.572868  346092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:49:35.572976  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:49:35.602425  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:49:35.633811  346092 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:49:35.634957  346092 out.go:177]   - env NO_PROXY=192.168.39.168
	I0803 23:49:35.636018  346092 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0803 23:49:35.638807  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:35.639117  346092 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:49:24 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0803 23:49:35.639150  346092 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0803 23:49:35.639321  346092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:49:35.643666  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:49:35.656809  346092 mustload.go:65] Loading cluster: ha-349588
	I0803 23:49:35.657051  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:49:35.657322  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:35.657356  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:35.673016  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0803 23:49:35.673600  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:35.674137  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:35.674163  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:35.674524  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:35.674729  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:49:35.676374  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:35.676762  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:35.676801  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:35.692471  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0803 23:49:35.692902  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:35.693448  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:35.693472  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:35.693833  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:35.694035  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:35.694192  346092 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.67
	I0803 23:49:35.694204  346092 certs.go:194] generating shared ca certs ...
	I0803 23:49:35.694223  346092 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:35.694426  346092 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:49:35.694494  346092 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:49:35.694507  346092 certs.go:256] generating profile certs ...
	I0803 23:49:35.694605  346092 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:49:35.694640  346092 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71
	I0803 23:49:35.694659  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.67 192.168.39.254]
	I0803 23:49:35.917497  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71 ...
	I0803 23:49:35.917544  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71: {Name:mke951f82b9c8987c94f55cf17d3747067a5c196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:35.917758  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71 ...
	I0803 23:49:35.917774  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71: {Name:mk02505c8ddb9ca87fb327815bc5ef9322277b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:49:35.917868  346092 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.22f6dd71 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:49:35.918007  346092 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.22f6dd71 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
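	(The certs.go lines above issue a serving certificate for the new control-plane node whose SANs cover the cluster service IP, loopback, both node IPs and the load-balancer VIP, then move it into place under the profile directory. For illustration only -- self-signed rather than signed by the cluster CA, and with a shortened SAN list -- generating a certificate with IP SANs in Go looks roughly like this:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "example-apiserver"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs of the kind listed in the log (cluster service IP,
			// loopback, node IP); the real list also includes the VIP.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("192.168.39.67"),
			},
			DNSNames: []string{"localhost", "minikube"},
		}

		// Self-sign: template doubles as parent. minikube instead signs with ca.key.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	)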
	I0803 23:49:35.918138  346092 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:49:35.918156  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:49:35.918172  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:49:35.918186  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:49:35.918202  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:49:35.918215  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:49:35.918227  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:49:35.918239  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:49:35.918251  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:49:35.918299  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:49:35.918329  346092 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:49:35.918338  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:49:35.918360  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:49:35.918384  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:49:35.918404  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:49:35.918441  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:49:35.918466  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:49:35.918480  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:49:35.918490  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:35.918524  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:35.921612  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:35.922060  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:35.922086  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:35.922268  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:35.922528  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:35.922696  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:35.922862  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:35.993968  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0803 23:49:35.999189  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:49:36.012984  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0803 23:49:36.017711  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0803 23:49:36.030643  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:49:36.035338  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:49:36.046723  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:49:36.051542  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:49:36.063687  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:49:36.068111  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:49:36.079721  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0803 23:49:36.084574  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:49:36.097214  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:49:36.123871  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:49:36.150504  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:49:36.176044  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:49:36.201207  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0803 23:49:36.226385  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:49:36.251236  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:49:36.276179  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:49:36.300943  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:49:36.326134  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:49:36.351169  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:49:36.376713  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:49:36.394851  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0803 23:49:36.412839  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:49:36.429967  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:49:36.447330  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:49:36.465224  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:49:36.484948  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:49:36.504399  346092 ssh_runner.go:195] Run: openssl version
	I0803 23:49:36.510609  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:49:36.522136  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:36.527204  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:36.527286  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:49:36.533370  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:49:36.545165  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:49:36.557561  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:49:36.562446  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:49:36.562514  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:49:36.568768  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:49:36.580483  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:49:36.592809  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:49:36.598103  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:49:36.598176  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:49:36.604409  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:49:36.616058  346092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:49:36.620743  346092 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:49:36.620797  346092 kubeadm.go:934] updating node {m02 192.168.39.67 8443 v1.30.3 crio true true} ...
	I0803 23:49:36.620884  346092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:49:36.620910  346092 kube-vip.go:115] generating kube-vip config ...
	I0803 23:49:36.620954  346092 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:49:36.638507  346092 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:49:36.638607  346092 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0803 23:49:36.638675  346092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:49:36.649343  346092 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:49:36.649405  346092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:49:36.661547  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:49:36.661580  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:49:36.661674  346092 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0803 23:49:36.661683  346092 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0803 23:49:36.661689  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:49:36.666174  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:49:36.666213  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:49:37.652314  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:49:37.669824  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:49:37.669973  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:49:37.674869  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:49:37.674913  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0803 23:49:39.850597  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:49:39.850682  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:49:39.855926  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:49:39.855963  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:49:40.108212  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:49:40.119630  346092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0803 23:49:40.137422  346092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:49:40.154245  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:49:40.171440  346092 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:49:40.175867  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:49:40.189078  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:49:40.320235  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:49:40.338074  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:49:40.338438  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:49:40.338480  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:49:40.354264  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0803 23:49:40.354726  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:49:40.355217  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:49:40.355240  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:49:40.355602  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:49:40.355803  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:49:40.355978  346092 start.go:317] joinCluster: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:49:40.356124  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:49:40.356168  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:49:40.359343  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:40.359840  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:49:40.359873  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:49:40.360005  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:49:40.360235  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:49:40.360419  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:49:40.360578  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:49:40.515090  346092 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:49:40.515146  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wojua7.acwoc7lubp2sjzye --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443"
	I0803 23:50:02.946975  346092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wojua7.acwoc7lubp2sjzye --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m02 --control-plane --apiserver-advertise-address=192.168.39.67 --apiserver-bind-port=8443": (22.431790037s)
	I0803 23:50:02.947019  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:50:03.446614  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-349588-m02 minikube.k8s.io/updated_at=2024_08_03T23_50_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=ha-349588 minikube.k8s.io/primary=false
	I0803 23:50:03.609385  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-349588-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0803 23:50:03.735531  346092 start.go:319] duration metric: took 23.379547866s to joinCluster
	I0803 23:50:03.735696  346092 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:50:03.735969  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:03.737123  346092 out.go:177] * Verifying Kubernetes components...
	I0803 23:50:03.738194  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:50:04.021548  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:50:04.087375  346092 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:50:04.087738  346092 kapi.go:59] client config for ha-349588: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key", CAFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:50:04.087825  346092 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.168:8443
	I0803 23:50:04.088155  346092 node_ready.go:35] waiting up to 6m0s for node "ha-349588-m02" to be "Ready" ...
	I0803 23:50:04.088267  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:04.088279  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:04.088289  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:04.088293  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:04.101608  346092 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0803 23:50:04.588793  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:04.588824  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:04.588835  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:04.588842  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:04.598391  346092 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0803 23:50:05.088603  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:05.088654  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:05.088667  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:05.088672  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:05.101983  346092 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0803 23:50:05.588487  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:05.588512  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:05.588520  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:05.588524  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:05.594895  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:50:06.088958  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:06.088993  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:06.089006  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:06.089013  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:06.093121  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:06.093903  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:06.589007  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:06.589034  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:06.589043  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:06.589049  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:06.592751  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:07.088728  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:07.088755  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:07.088765  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:07.088769  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:07.092603  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:07.588467  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:07.588494  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:07.588503  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:07.588507  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:07.592369  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:08.088806  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:08.088832  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:08.088842  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:08.088847  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:08.341203  346092 round_trippers.go:574] Response Status: 200 OK in 252 milliseconds
	I0803 23:50:08.341873  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:08.588741  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:08.588769  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:08.588784  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:08.588791  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:08.593035  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:09.088693  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:09.088718  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:09.088727  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:09.088730  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:09.092569  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:09.589160  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:09.589184  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:09.589193  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:09.589196  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:09.593032  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:10.088752  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:10.088778  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:10.088786  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:10.088790  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:10.093525  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:10.588993  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:10.589018  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:10.589026  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:10.589030  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:10.593009  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:10.593688  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:11.089324  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:11.089353  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:11.089364  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:11.089369  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:11.092666  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:11.588626  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:11.588652  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:11.588661  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:11.588665  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:11.592229  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:12.088688  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:12.088717  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:12.088728  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:12.088735  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:12.092442  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:12.588482  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:12.588508  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:12.588519  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:12.588525  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:12.592800  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:13.089159  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:13.089183  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:13.089192  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:13.089197  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:13.093008  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:13.093622  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:13.588880  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:13.588905  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:13.588914  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:13.588920  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:13.592486  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:14.088375  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:14.088400  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:14.088409  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:14.088413  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:14.092385  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:14.588835  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:14.588868  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:14.588877  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:14.588881  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:14.592420  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:15.089024  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:15.089047  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:15.089057  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:15.089061  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:15.094639  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:50:15.095247  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:15.589208  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:15.589235  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:15.589249  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:15.589255  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:15.592801  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:16.089093  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:16.089119  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:16.089127  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:16.089132  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:16.093269  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:16.588406  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:16.588436  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:16.588445  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:16.588448  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:16.592508  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:17.088496  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:17.088524  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:17.088532  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:17.088537  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:17.092475  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:17.588414  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:17.588441  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:17.588450  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:17.588454  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:17.591618  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:17.592274  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:18.088769  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:18.088799  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:18.088810  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:18.088815  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:18.092770  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:18.588849  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:18.588880  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:18.588891  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:18.588896  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:18.596556  346092 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:50:19.089425  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:19.089451  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:19.089460  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:19.089465  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:19.093449  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:19.588574  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:19.588600  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:19.588608  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:19.588611  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:19.592387  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:19.593033  346092 node_ready.go:53] node "ha-349588-m02" has status "Ready":"False"
	I0803 23:50:20.089327  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.089353  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.089365  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.089371  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.092714  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.093273  346092 node_ready.go:49] node "ha-349588-m02" has status "Ready":"True"
	I0803 23:50:20.093308  346092 node_ready.go:38] duration metric: took 16.005118223s for node "ha-349588-m02" to be "Ready" ...
	I0803 23:50:20.093320  346092 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:50:20.093433  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:20.093448  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.093462  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.093469  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.100038  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:50:20.105890  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.105983  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fzmtg
	I0803 23:50:20.105993  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.106000  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.106006  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.109203  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.110030  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.110048  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.110059  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.110065  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.113001  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.113519  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.113536  346092 pod_ready.go:81] duration metric: took 7.615549ms for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.113545  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.113609  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8qt6
	I0803 23:50:20.113616  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.113623  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.113630  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.116416  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.117033  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.117048  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.117055  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.117058  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.119946  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.120560  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.120579  346092 pod_ready.go:81] duration metric: took 7.024999ms for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.120591  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.120656  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588
	I0803 23:50:20.120666  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.120676  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.120683  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.122841  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.123473  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.123488  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.123495  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.123500  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.125721  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.126110  346092 pod_ready.go:92] pod "etcd-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.126125  346092 pod_ready.go:81] duration metric: took 5.526947ms for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.126134  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.126181  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m02
	I0803 23:50:20.126188  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.126194  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.126198  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.128291  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.128736  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.128749  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.128756  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.128759  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.130925  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:50:20.131570  346092 pod_ready.go:92] pod "etcd-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.131587  346092 pod_ready.go:81] duration metric: took 5.446889ms for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.131599  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.290056  346092 request.go:629] Waited for 158.368975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:50:20.290126  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:50:20.290132  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.290140  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.290145  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.293571  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.489637  346092 request.go:629] Waited for 195.390522ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.489707  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:20.489712  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.489720  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.489725  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.492984  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.493434  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.493454  346092 pod_ready.go:81] duration metric: took 361.848401ms for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.493467  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.689358  346092 request.go:629] Waited for 195.783846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:50:20.689430  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:50:20.689438  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.689456  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.689465  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.693618  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:20.889724  346092 request.go:629] Waited for 195.310872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.889787  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:20.889792  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:20.889800  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:20.889804  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:20.893536  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:20.894217  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:20.894238  346092 pod_ready.go:81] duration metric: took 400.764562ms for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:20.894248  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.090421  346092 request.go:629] Waited for 196.080925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:50:21.090509  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:50:21.090516  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.090534  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.090543  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.094083  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.290154  346092 request.go:629] Waited for 195.368168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:21.290234  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:21.290238  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.290246  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.290250  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.293525  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.294206  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:21.294224  346092 pod_ready.go:81] duration metric: took 399.970486ms for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.294234  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.490360  346092 request.go:629] Waited for 196.055949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:50:21.490451  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:50:21.490456  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.490465  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.490468  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.494025  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.690145  346092 request.go:629] Waited for 195.384727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:21.690220  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:21.690228  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.690240  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.690248  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.693529  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:21.694169  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:21.694192  346092 pod_ready.go:81] duration metric: took 399.951921ms for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.694202  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:21.890179  346092 request.go:629] Waited for 195.854387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:50:21.890259  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:50:21.890265  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:21.890274  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:21.890279  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:21.893716  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.089910  346092 request.go:629] Waited for 195.371972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.090002  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.090008  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.090016  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.090027  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.093738  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.094479  346092 pod_ready.go:92] pod "kube-proxy-bbzdt" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:22.094500  346092 pod_ready.go:81] duration metric: took 400.291536ms for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.094509  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.289594  346092 request.go:629] Waited for 195.006002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:50:22.289686  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:50:22.289694  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.289702  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.289707  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.293120  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.490210  346092 request.go:629] Waited for 196.420033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:22.490278  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:22.490283  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.490291  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.490294  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.493680  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.494297  346092 pod_ready.go:92] pod "kube-proxy-gbg5q" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:22.494322  346092 pod_ready.go:81] duration metric: took 399.806171ms for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.494332  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.689376  346092 request.go:629] Waited for 194.960002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:50:22.689464  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:50:22.689470  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.689478  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.689482  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.693104  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.890297  346092 request.go:629] Waited for 196.361011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.890391  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:50:22.890399  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:22.890411  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:22.890420  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:22.893697  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:22.894373  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:22.894394  346092 pod_ready.go:81] duration metric: took 400.055147ms for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:22.894407  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:23.089430  346092 request.go:629] Waited for 194.917023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:50:23.089499  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:50:23.089515  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.089526  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.089531  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.093012  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:23.290011  346092 request.go:629] Waited for 196.376685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:23.290074  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:50:23.290079  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.290087  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.290094  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.293439  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:23.293983  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:50:23.294011  346092 pod_ready.go:81] duration metric: took 399.595842ms for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:50:23.294023  346092 pod_ready.go:38] duration metric: took 3.200674416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
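
The readiness loop above alternates a GET on each control-plane pod with a GET on its node, throttled client-side to roughly one request per 200ms. For reference, a minimal client-go sketch of the same Ready-condition check (illustrative only: the kubeconfig path and the podIsReady helper name are assumptions, not minikube's pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod has its Ready condition set to True.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Hypothetical kubeconfig path; minikube builds its client from the profile's kubeconfig.
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := podIsReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-349588")
    	fmt.Println(ready, err)
    }
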
	I0803 23:50:23.294047  346092 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:50:23.294103  346092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:50:23.311925  346092 api_server.go:72] duration metric: took 19.57618176s to wait for apiserver process to appear ...
	I0803 23:50:23.311959  346092 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:50:23.311986  346092 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0803 23:50:23.316404  346092 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0803 23:50:23.316479  346092 round_trippers.go:463] GET https://192.168.39.168:8443/version
	I0803 23:50:23.316488  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.316496  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.316500  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.317421  346092 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0803 23:50:23.317614  346092 api_server.go:141] control plane version: v1.30.3
	I0803 23:50:23.317642  346092 api_server.go:131] duration metric: took 5.676569ms to wait for apiserver health ...
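
The health wait above is just a GET on /healthz (expecting the literal body "ok") followed by a GET on /version to read the control-plane version. A rough client-go equivalent (sketch; the kubeconfig path is a placeholder, and routing the raw /healthz request through the CoreV1 RESTClient is only one of several possible choices):

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	// GET /healthz and expect the body "ok".
    	body, err := cs.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)

    	// GET /version via the discovery client.
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    }
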
	I0803 23:50:23.317651  346092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:50:23.489822  346092 request.go:629] Waited for 172.098571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.489889  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.489894  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.489904  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.489909  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.495307  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:50:23.500631  346092 system_pods.go:59] 17 kube-system pods found
	I0803 23:50:23.500677  346092 system_pods.go:61] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:50:23.500684  346092 system_pods.go:61] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:50:23.500688  346092 system_pods.go:61] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:50:23.500691  346092 system_pods.go:61] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:50:23.500698  346092 system_pods.go:61] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:50:23.500701  346092 system_pods.go:61] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:50:23.500704  346092 system_pods.go:61] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:50:23.500708  346092 system_pods.go:61] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:50:23.500713  346092 system_pods.go:61] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:50:23.500718  346092 system_pods.go:61] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:50:23.500722  346092 system_pods.go:61] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:50:23.500727  346092 system_pods.go:61] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:50:23.500731  346092 system_pods.go:61] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:50:23.500736  346092 system_pods.go:61] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:50:23.500744  346092 system_pods.go:61] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:50:23.500748  346092 system_pods.go:61] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:50:23.500760  346092 system_pods.go:61] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:50:23.500774  346092 system_pods.go:74] duration metric: took 183.114377ms to wait for pod list to return data ...
	I0803 23:50:23.500785  346092 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:50:23.690294  346092 request.go:629] Waited for 189.403835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:50:23.690372  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:50:23.690379  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.690387  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.690390  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.693867  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:50:23.694134  346092 default_sa.go:45] found service account: "default"
	I0803 23:50:23.694155  346092 default_sa.go:55] duration metric: took 193.358105ms for default service account to be created ...
	I0803 23:50:23.694165  346092 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:50:23.889572  346092 request.go:629] Waited for 195.298844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.889643  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:50:23.889648  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:23.889656  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:23.889667  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:23.895176  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:50:23.899132  346092 system_pods.go:86] 17 kube-system pods found
	I0803 23:50:23.899162  346092 system_pods.go:89] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:50:23.899168  346092 system_pods.go:89] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:50:23.899173  346092 system_pods.go:89] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:50:23.899177  346092 system_pods.go:89] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:50:23.899180  346092 system_pods.go:89] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:50:23.899184  346092 system_pods.go:89] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:50:23.899188  346092 system_pods.go:89] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:50:23.899191  346092 system_pods.go:89] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:50:23.899196  346092 system_pods.go:89] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:50:23.899199  346092 system_pods.go:89] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:50:23.899203  346092 system_pods.go:89] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:50:23.899206  346092 system_pods.go:89] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:50:23.899210  346092 system_pods.go:89] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:50:23.899214  346092 system_pods.go:89] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:50:23.899218  346092 system_pods.go:89] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:50:23.899221  346092 system_pods.go:89] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:50:23.899224  346092 system_pods.go:89] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:50:23.899232  346092 system_pods.go:126] duration metric: took 205.059563ms to wait for k8s-apps to be running ...
	I0803 23:50:23.899241  346092 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 23:50:23.899289  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:50:23.918586  346092 system_svc.go:56] duration metric: took 19.330492ms WaitForService to wait for kubelet
	I0803 23:50:23.918619  346092 kubeadm.go:582] duration metric: took 20.182883458s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:50:23.918639  346092 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:50:24.090107  346092 request.go:629] Waited for 171.389393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes
	I0803 23:50:24.090194  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes
	I0803 23:50:24.090202  346092 round_trippers.go:469] Request Headers:
	I0803 23:50:24.090213  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:50:24.090218  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:50:24.094436  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:50:24.095386  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:50:24.095419  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:50:24.095433  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:50:24.095439  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:50:24.095445  346092 node_conditions.go:105] duration metric: took 176.80069ms to run NodePressure ...
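
The NodePressure step reads each node's advertised capacity rather than exercising anything; the two quantities it logs are cpu and ephemeral-storage. A compact sketch of pulling the same values with client-go (same placeholder kubeconfig assumption as in the sketches above):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// The same two quantities the NodePressure check logs above.
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }
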
	I0803 23:50:24.095460  346092 start.go:241] waiting for startup goroutines ...
	I0803 23:50:24.095497  346092 start.go:255] writing updated cluster config ...
	I0803 23:50:24.097766  346092 out.go:177] 
	I0803 23:50:24.099166  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:24.099285  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:50:24.101394  346092 out.go:177] * Starting "ha-349588-m03" control-plane node in "ha-349588" cluster
	I0803 23:50:24.102673  346092 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:50:24.102710  346092 cache.go:56] Caching tarball of preloaded images
	I0803 23:50:24.102810  346092 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:50:24.102821  346092 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:50:24.102925  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:50:24.103193  346092 start.go:360] acquireMachinesLock for ha-349588-m03: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:50:24.103240  346092 start.go:364] duration metric: took 27.2µs to acquireMachinesLock for "ha-349588-m03"
	I0803 23:50:24.103261  346092 start.go:93] Provisioning new machine with config: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:50:24.103356  346092 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0803 23:50:24.104783  346092 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0803 23:50:24.104893  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:24.104933  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:24.121746  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0803 23:50:24.122292  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:24.122833  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:24.122857  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:24.123219  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:24.123424  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:24.123599  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:24.123792  346092 start.go:159] libmachine.API.Create for "ha-349588" (driver="kvm2")
	I0803 23:50:24.123823  346092 client.go:168] LocalClient.Create starting
	I0803 23:50:24.123860  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0803 23:50:24.123907  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:50:24.123930  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:50:24.124006  346092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0803 23:50:24.124033  346092 main.go:141] libmachine: Decoding PEM data...
	I0803 23:50:24.124049  346092 main.go:141] libmachine: Parsing certificate...
	I0803 23:50:24.124078  346092 main.go:141] libmachine: Running pre-create checks...
	I0803 23:50:24.124089  346092 main.go:141] libmachine: (ha-349588-m03) Calling .PreCreateCheck
	I0803 23:50:24.124263  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetConfigRaw
	I0803 23:50:24.124674  346092 main.go:141] libmachine: Creating machine...
	I0803 23:50:24.124688  346092 main.go:141] libmachine: (ha-349588-m03) Calling .Create
	I0803 23:50:24.124837  346092 main.go:141] libmachine: (ha-349588-m03) Creating KVM machine...
	I0803 23:50:24.126236  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found existing default KVM network
	I0803 23:50:24.126409  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found existing private KVM network mk-ha-349588
	I0803 23:50:24.126593  346092 main.go:141] libmachine: (ha-349588-m03) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03 ...
	I0803 23:50:24.126610  346092 main.go:141] libmachine: (ha-349588-m03) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:50:24.126756  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.126596  346924 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:50:24.126786  346092 main.go:141] libmachine: (ha-349588-m03) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0803 23:50:24.399033  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.398884  346924 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa...
	I0803 23:50:24.516914  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.516772  346924 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/ha-349588-m03.rawdisk...
	I0803 23:50:24.516953  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Writing magic tar header
	I0803 23:50:24.516982  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Writing SSH key tar header
	I0803 23:50:24.516996  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:24.516895  346924 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03 ...
	I0803 23:50:24.517016  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03
	I0803 23:50:24.517113  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0803 23:50:24.517143  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03 (perms=drwx------)
	I0803 23:50:24.517162  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:50:24.517179  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0803 23:50:24.517197  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0803 23:50:24.517211  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0803 23:50:24.517226  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0803 23:50:24.517243  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0803 23:50:24.517254  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home/jenkins
	I0803 23:50:24.517267  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Checking permissions on dir: /home
	I0803 23:50:24.517278  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Skipping /home - not owner
	I0803 23:50:24.517290  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0803 23:50:24.517307  346092 main.go:141] libmachine: (ha-349588-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0803 23:50:24.517318  346092 main.go:141] libmachine: (ha-349588-m03) Creating domain...
	I0803 23:50:24.518387  346092 main.go:141] libmachine: (ha-349588-m03) define libvirt domain using xml: 
	I0803 23:50:24.518416  346092 main.go:141] libmachine: (ha-349588-m03) <domain type='kvm'>
	I0803 23:50:24.518427  346092 main.go:141] libmachine: (ha-349588-m03)   <name>ha-349588-m03</name>
	I0803 23:50:24.518438  346092 main.go:141] libmachine: (ha-349588-m03)   <memory unit='MiB'>2200</memory>
	I0803 23:50:24.518445  346092 main.go:141] libmachine: (ha-349588-m03)   <vcpu>2</vcpu>
	I0803 23:50:24.518453  346092 main.go:141] libmachine: (ha-349588-m03)   <features>
	I0803 23:50:24.518464  346092 main.go:141] libmachine: (ha-349588-m03)     <acpi/>
	I0803 23:50:24.518474  346092 main.go:141] libmachine: (ha-349588-m03)     <apic/>
	I0803 23:50:24.518485  346092 main.go:141] libmachine: (ha-349588-m03)     <pae/>
	I0803 23:50:24.518498  346092 main.go:141] libmachine: (ha-349588-m03)     
	I0803 23:50:24.518509  346092 main.go:141] libmachine: (ha-349588-m03)   </features>
	I0803 23:50:24.518523  346092 main.go:141] libmachine: (ha-349588-m03)   <cpu mode='host-passthrough'>
	I0803 23:50:24.518533  346092 main.go:141] libmachine: (ha-349588-m03)   
	I0803 23:50:24.518543  346092 main.go:141] libmachine: (ha-349588-m03)   </cpu>
	I0803 23:50:24.518553  346092 main.go:141] libmachine: (ha-349588-m03)   <os>
	I0803 23:50:24.518563  346092 main.go:141] libmachine: (ha-349588-m03)     <type>hvm</type>
	I0803 23:50:24.518575  346092 main.go:141] libmachine: (ha-349588-m03)     <boot dev='cdrom'/>
	I0803 23:50:24.518584  346092 main.go:141] libmachine: (ha-349588-m03)     <boot dev='hd'/>
	I0803 23:50:24.518594  346092 main.go:141] libmachine: (ha-349588-m03)     <bootmenu enable='no'/>
	I0803 23:50:24.518607  346092 main.go:141] libmachine: (ha-349588-m03)   </os>
	I0803 23:50:24.518618  346092 main.go:141] libmachine: (ha-349588-m03)   <devices>
	I0803 23:50:24.518629  346092 main.go:141] libmachine: (ha-349588-m03)     <disk type='file' device='cdrom'>
	I0803 23:50:24.518647  346092 main.go:141] libmachine: (ha-349588-m03)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/boot2docker.iso'/>
	I0803 23:50:24.518657  346092 main.go:141] libmachine: (ha-349588-m03)       <target dev='hdc' bus='scsi'/>
	I0803 23:50:24.518670  346092 main.go:141] libmachine: (ha-349588-m03)       <readonly/>
	I0803 23:50:24.518683  346092 main.go:141] libmachine: (ha-349588-m03)     </disk>
	I0803 23:50:24.518726  346092 main.go:141] libmachine: (ha-349588-m03)     <disk type='file' device='disk'>
	I0803 23:50:24.518753  346092 main.go:141] libmachine: (ha-349588-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0803 23:50:24.518781  346092 main.go:141] libmachine: (ha-349588-m03)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/ha-349588-m03.rawdisk'/>
	I0803 23:50:24.518793  346092 main.go:141] libmachine: (ha-349588-m03)       <target dev='hda' bus='virtio'/>
	I0803 23:50:24.518802  346092 main.go:141] libmachine: (ha-349588-m03)     </disk>
	I0803 23:50:24.518813  346092 main.go:141] libmachine: (ha-349588-m03)     <interface type='network'>
	I0803 23:50:24.518823  346092 main.go:141] libmachine: (ha-349588-m03)       <source network='mk-ha-349588'/>
	I0803 23:50:24.518836  346092 main.go:141] libmachine: (ha-349588-m03)       <model type='virtio'/>
	I0803 23:50:24.518878  346092 main.go:141] libmachine: (ha-349588-m03)     </interface>
	I0803 23:50:24.518907  346092 main.go:141] libmachine: (ha-349588-m03)     <interface type='network'>
	I0803 23:50:24.518942  346092 main.go:141] libmachine: (ha-349588-m03)       <source network='default'/>
	I0803 23:50:24.518960  346092 main.go:141] libmachine: (ha-349588-m03)       <model type='virtio'/>
	I0803 23:50:24.518970  346092 main.go:141] libmachine: (ha-349588-m03)     </interface>
	I0803 23:50:24.518978  346092 main.go:141] libmachine: (ha-349588-m03)     <serial type='pty'>
	I0803 23:50:24.518990  346092 main.go:141] libmachine: (ha-349588-m03)       <target port='0'/>
	I0803 23:50:24.519000  346092 main.go:141] libmachine: (ha-349588-m03)     </serial>
	I0803 23:50:24.519008  346092 main.go:141] libmachine: (ha-349588-m03)     <console type='pty'>
	I0803 23:50:24.519020  346092 main.go:141] libmachine: (ha-349588-m03)       <target type='serial' port='0'/>
	I0803 23:50:24.519029  346092 main.go:141] libmachine: (ha-349588-m03)     </console>
	I0803 23:50:24.519045  346092 main.go:141] libmachine: (ha-349588-m03)     <rng model='virtio'>
	I0803 23:50:24.519059  346092 main.go:141] libmachine: (ha-349588-m03)       <backend model='random'>/dev/random</backend>
	I0803 23:50:24.519065  346092 main.go:141] libmachine: (ha-349588-m03)     </rng>
	I0803 23:50:24.519077  346092 main.go:141] libmachine: (ha-349588-m03)     
	I0803 23:50:24.519084  346092 main.go:141] libmachine: (ha-349588-m03)     
	I0803 23:50:24.519098  346092 main.go:141] libmachine: (ha-349588-m03)   </devices>
	I0803 23:50:24.519106  346092 main.go:141] libmachine: (ha-349588-m03) </domain>
	I0803 23:50:24.519124  346092 main.go:141] libmachine: (ha-349588-m03) 
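
The <domain> document logged piecewise above is a complete libvirt definition: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs (one on mk-ha-349588, one on the default network). The kvm2 driver submits it through the libvirt API; a hypothetical shell-level equivalent would be to write the XML to a file and run virsh define followed by virsh start, roughly as sketched here (illustration only, not the driver's actual code path):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// domainXML would hold the full <domain>...</domain> document logged above.
    	domainXML := "<domain type='kvm'>...</domain>" // truncated placeholder

    	f, err := os.CreateTemp("", "ha-349588-m03-*.xml")
    	if err != nil {
    		panic(err)
    	}
    	defer os.Remove(f.Name())
    	if _, err := f.WriteString(domainXML); err != nil {
    		panic(err)
    	}
    	f.Close()

    	// Register the domain with libvirt, then boot it.
    	for _, args := range [][]string{
    		{"virsh", "define", f.Name()},
    		{"virsh", "start", "ha-349588-m03"},
    	} {
    		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    		fmt.Printf("%v: %s\n", args, out)
    		if err != nil {
    			panic(err)
    		}
    	}
    }
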
	I0803 23:50:24.526713  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:ab:a3:ea in network default
	I0803 23:50:24.527228  346092 main.go:141] libmachine: (ha-349588-m03) Ensuring networks are active...
	I0803 23:50:24.527253  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:24.527861  346092 main.go:141] libmachine: (ha-349588-m03) Ensuring network default is active
	I0803 23:50:24.528200  346092 main.go:141] libmachine: (ha-349588-m03) Ensuring network mk-ha-349588 is active
	I0803 23:50:24.528499  346092 main.go:141] libmachine: (ha-349588-m03) Getting domain xml...
	I0803 23:50:24.529299  346092 main.go:141] libmachine: (ha-349588-m03) Creating domain...
	I0803 23:50:25.809639  346092 main.go:141] libmachine: (ha-349588-m03) Waiting to get IP...
	I0803 23:50:25.810693  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:25.811149  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:25.811200  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:25.811127  346924 retry.go:31] will retry after 239.766839ms: waiting for machine to come up
	I0803 23:50:26.052890  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:26.053455  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:26.053526  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:26.053417  346924 retry.go:31] will retry after 350.096869ms: waiting for machine to come up
	I0803 23:50:26.404999  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:26.405425  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:26.405450  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:26.405378  346924 retry.go:31] will retry after 426.316752ms: waiting for machine to come up
	I0803 23:50:26.832924  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:26.833346  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:26.833377  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:26.833286  346924 retry.go:31] will retry after 468.911288ms: waiting for machine to come up
	I0803 23:50:27.303717  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:27.304186  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:27.304209  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:27.304153  346924 retry.go:31] will retry after 588.198491ms: waiting for machine to come up
	I0803 23:50:27.893918  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:27.894345  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:27.894376  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:27.894289  346924 retry.go:31] will retry after 756.527198ms: waiting for machine to come up
	I0803 23:50:28.652222  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:28.652692  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:28.652722  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:28.652635  346924 retry.go:31] will retry after 956.618375ms: waiting for machine to come up
	I0803 23:50:29.610577  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:29.611053  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:29.611081  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:29.611003  346924 retry.go:31] will retry after 894.193355ms: waiting for machine to come up
	I0803 23:50:30.506910  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:30.507443  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:30.507475  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:30.507383  346924 retry.go:31] will retry after 1.475070752s: waiting for machine to come up
	I0803 23:50:31.984363  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:31.984792  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:31.984823  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:31.984738  346924 retry.go:31] will retry after 1.96830202s: waiting for machine to come up
	I0803 23:50:33.954805  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:33.955250  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:33.955283  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:33.955190  346924 retry.go:31] will retry after 2.345601343s: waiting for machine to come up
	I0803 23:50:36.302961  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:36.303447  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:36.303478  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:36.303397  346924 retry.go:31] will retry after 2.267010238s: waiting for machine to come up
	I0803 23:50:38.571635  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:38.572141  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:38.572165  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:38.572088  346924 retry.go:31] will retry after 4.429291681s: waiting for machine to come up
	I0803 23:50:43.003156  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:43.003613  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find current IP address of domain ha-349588-m03 in network mk-ha-349588
	I0803 23:50:43.003638  346092 main.go:141] libmachine: (ha-349588-m03) DBG | I0803 23:50:43.003558  346924 retry.go:31] will retry after 3.48372957s: waiting for machine to come up
	I0803 23:50:46.490110  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.490603  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has current primary IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.490633  346092 main.go:141] libmachine: (ha-349588-m03) Found IP for machine: 192.168.39.79
	I0803 23:50:46.490655  346092 main.go:141] libmachine: (ha-349588-m03) Reserving static IP address...
	I0803 23:50:46.491072  346092 main.go:141] libmachine: (ha-349588-m03) DBG | unable to find host DHCP lease matching {name: "ha-349588-m03", mac: "52:54:00:1d:c9:03", ip: "192.168.39.79"} in network mk-ha-349588
	I0803 23:50:46.573000  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Getting to WaitForSSH function...
	I0803 23:50:46.573036  346092 main.go:141] libmachine: (ha-349588-m03) Reserved static IP address: 192.168.39.79
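
Each "will retry after ..." line above comes from a polling loop that sleeps an increasing, jittered delay between attempts to read the new domain's DHCP lease. A generic sketch of that wait-with-backoff shape (not minikube's retry package; the delays, cap, and names here are illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitWithBackoff polls check() with an increasing, jittered delay until it
    // succeeds or the deadline passes.
    func waitWithBackoff(check func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if err := check(); err == nil {
    			return nil
    		}
    		// Add up to ~50% jitter, then grow the base delay, capped at 5s.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
    		time.Sleep(sleep)
    		if delay *= 2; delay > 5*time.Second {
    			delay = 5 * time.Second
    		}
    	}
    	return errors.New("timed out waiting for condition")
    }

    func main() {
    	attempts := 0
    	err := waitWithBackoff(func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("machine has no IP yet") // stand-in for the libvirt DHCP lookup
    		}
    		return nil
    	}, time.Minute)
    	fmt.Println("attempts:", attempts, "err:", err)
    }
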
	I0803 23:50:46.573049  346092 main.go:141] libmachine: (ha-349588-m03) Waiting for SSH to be available...
	I0803 23:50:46.575539  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.575870  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.575901  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.576123  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Using SSH client type: external
	I0803 23:50:46.576160  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa (-rw-------)
	I0803 23:50:46.576188  346092 main.go:141] libmachine: (ha-349588-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0803 23:50:46.576201  346092 main.go:141] libmachine: (ha-349588-m03) DBG | About to run SSH command:
	I0803 23:50:46.576213  346092 main.go:141] libmachine: (ha-349588-m03) DBG | exit 0
	I0803 23:50:46.710090  346092 main.go:141] libmachine: (ha-349588-m03) DBG | SSH cmd err, output: <nil>: 
	I0803 23:50:46.710376  346092 main.go:141] libmachine: (ha-349588-m03) KVM machine creation complete!
	I0803 23:50:46.710702  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetConfigRaw
	I0803 23:50:46.711288  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:46.711523  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:46.711699  346092 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0803 23:50:46.711715  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:50:46.713405  346092 main.go:141] libmachine: Detecting operating system of created instance...
	I0803 23:50:46.713420  346092 main.go:141] libmachine: Waiting for SSH to be available...
	I0803 23:50:46.713426  346092 main.go:141] libmachine: Getting to WaitForSSH function...
	I0803 23:50:46.713432  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:46.715823  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.716240  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.716262  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.716392  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:46.716587  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.716764  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.716943  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:46.717168  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:46.717414  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:46.717426  346092 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0803 23:50:46.833008  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
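
Both SSH waits above, first via the external ssh binary and then via the native client, simply run exit 0 and treat a zero exit status as proof that sshd is reachable with the machine's key. A compact sketch of the native variant using golang.org/x/crypto/ssh (address, user, and key path are placeholders taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // sshReachable dials the host and runs "exit 0"; a nil error is all the
    // wait loop above needs to know.
    func sshReachable(addr, user, keyPath string) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	return session.Run("exit 0")
    }

    func main() {
    	// Placeholder values; the log above uses the machine's id_rsa and 192.168.39.79:22.
    	fmt.Println(sshReachable("192.168.39.79:22", "docker", "/path/to/id_rsa"))
    }
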
	I0803 23:50:46.833043  346092 main.go:141] libmachine: Detecting the provisioner...
	I0803 23:50:46.833055  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:46.836050  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.836542  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.836581  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.836685  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:46.836896  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.837102  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.837277  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:46.837427  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:46.837659  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:46.837674  346092 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0803 23:50:46.954626  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0803 23:50:46.954732  346092 main.go:141] libmachine: found compatible host: buildroot
	I0803 23:50:46.954746  346092 main.go:141] libmachine: Provisioning with buildroot...
	I0803 23:50:46.954761  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:46.955024  346092 buildroot.go:166] provisioning hostname "ha-349588-m03"
	I0803 23:50:46.955054  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:46.955260  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:46.958280  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.958653  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:46.958677  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:46.958827  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:46.959018  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.959199  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:46.959356  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:46.959528  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:46.959713  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:46.959727  346092 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588-m03 && echo "ha-349588-m03" | sudo tee /etc/hostname
	I0803 23:50:47.091774  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588-m03
	
	I0803 23:50:47.091816  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.095084  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.095475  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.095509  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.095705  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.095912  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.096140  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.096327  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.096531  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:47.096732  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:47.096764  346092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:50:47.224450  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
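The empty SSH output above means the /etc/hosts edit succeeded: the script rewrites an existing 127.0.1.1 entry to point at the new hostname, or appends one if the name is not present at all. A minimal Go sketch of the same logic, assuming nothing beyond what the shell snippet shows (not the code minikube actually runs on the guest):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mimics the /etc/hosts edit in the logged shell script.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // hostname already mapped
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-349588-m03"))
	}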
	I0803 23:50:47.224503  346092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:50:47.224536  346092 buildroot.go:174] setting up certificates
	I0803 23:50:47.224547  346092 provision.go:84] configureAuth start
	I0803 23:50:47.224561  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetMachineName
	I0803 23:50:47.224940  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:47.228138  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.228514  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.228544  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.228711  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.231105  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.231425  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.231449  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.231559  346092 provision.go:143] copyHostCerts
	I0803 23:50:47.231597  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:50:47.231642  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:50:47.231680  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:50:47.231784  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:50:47.231887  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:50:47.231914  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:50:47.231924  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:50:47.231961  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:50:47.232050  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:50:47.232071  346092 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:50:47.232075  346092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:50:47.232099  346092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:50:47.232148  346092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588-m03 san=[127.0.0.1 192.168.39.79 ha-349588-m03 localhost minikube]
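The SAN list in the line above covers loopback, the node's DHCP address, its hostname, localhost, and minikube, and the certificate is signed by the ca.pem/ca-key.pem pair under .minikube/certs. As a hedged sketch of what such a server-certificate template looks like in Go (self-signed here purely to stay self-contained; this is not minikube's certificate code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-349588-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.79")},
			DNSNames:    []string{"ha-349588-m03", "localhost", "minikube"},
		}
		// Self-signed only to keep the sketch runnable; the real cert is signed by the CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("generated %d-byte DER certificate\n", len(der))
	}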
	I0803 23:50:47.399469  346092 provision.go:177] copyRemoteCerts
	I0803 23:50:47.399534  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:50:47.399562  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.402686  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.403211  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.403235  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.403420  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.403606  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.403793  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.403925  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:47.492467  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:50:47.492566  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 23:50:47.517280  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:50:47.517354  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:50:47.542649  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:50:47.542733  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:50:47.568029  346092 provision.go:87] duration metric: took 343.464982ms to configureAuth
	I0803 23:50:47.568066  346092 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:50:47.568348  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:47.568459  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.571724  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.572147  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.572177  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.572434  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.572661  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.572844  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.573018  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.573266  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:47.573499  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:47.573552  346092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:50:47.853244  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
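The stray "%!s(MISSING)" in the logged command is a Go fmt artifact, not part of what ran on the guest: the already-expanded command contains a literal printf verb, and when it is passed back through a format call with no arguments, fmt inserts the missing-operand marker. The same marker explains the %!p, %!y and %!N occurrences later in this log. A tiny reproduction:

	package main

	import "fmt"

	func main() {
		// A format verb with no matching argument renders as "%!s(MISSING)",
		// the marker sprinkled through the logged commands above and below.
		fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s ..."))
	}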
	
	I0803 23:50:47.853276  346092 main.go:141] libmachine: Checking connection to Docker...
	I0803 23:50:47.853285  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetURL
	I0803 23:50:47.854683  346092 main.go:141] libmachine: (ha-349588-m03) DBG | Using libvirt version 6000000
	I0803 23:50:47.856880  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.857234  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.857272  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.857396  346092 main.go:141] libmachine: Docker is up and running!
	I0803 23:50:47.857411  346092 main.go:141] libmachine: Reticulating splines...
	I0803 23:50:47.857419  346092 client.go:171] duration metric: took 23.733587583s to LocalClient.Create
	I0803 23:50:47.857445  346092 start.go:167] duration metric: took 23.733655538s to libmachine.API.Create "ha-349588"
	I0803 23:50:47.857455  346092 start.go:293] postStartSetup for "ha-349588-m03" (driver="kvm2")
	I0803 23:50:47.857465  346092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:50:47.857481  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:47.857750  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:50:47.857787  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.859967  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.860290  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.860314  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.860473  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.860661  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.860856  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:47.861033  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:47.950131  346092 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:50:47.954819  346092 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:50:47.954849  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:50:47.954920  346092 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:50:47.955013  346092 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:50:47.955026  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:50:47.955136  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:50:47.965629  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:50:47.991355  346092 start.go:296] duration metric: took 133.884824ms for postStartSetup
	I0803 23:50:47.991428  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetConfigRaw
	I0803 23:50:47.992144  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:47.995389  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.995867  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.995892  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.996186  346092 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:50:47.996383  346092 start.go:128] duration metric: took 23.893015539s to createHost
	I0803 23:50:47.996409  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:47.998754  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.999113  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:47.999143  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:47.999287  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:47.999474  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.999669  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:47.999821  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:48.000025  346092 main.go:141] libmachine: Using SSH client type: native
	I0803 23:50:48.000233  346092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I0803 23:50:48.000247  346092 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:50:48.118642  346092 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729048.091068532
	
	I0803 23:50:48.118694  346092 fix.go:216] guest clock: 1722729048.091068532
	I0803 23:50:48.118704  346092 fix.go:229] Guest: 2024-08-03 23:50:48.091068532 +0000 UTC Remote: 2024-08-03 23:50:47.996396829 +0000 UTC m=+158.613815502 (delta=94.671703ms)
	I0803 23:50:48.118730  346092 fix.go:200] guest clock delta is within tolerance: 94.671703ms
	I0803 23:50:48.118739  346092 start.go:83] releasing machines lock for "ha-349588-m03", held for 24.015487886s
	I0803 23:50:48.118770  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.119061  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:48.121626  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.121930  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:48.121964  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.124323  346092 out.go:177] * Found network options:
	I0803 23:50:48.126077  346092 out.go:177]   - NO_PROXY=192.168.39.168,192.168.39.67
	W0803 23:50:48.127478  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:50:48.127501  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:50:48.127518  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.128153  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.128346  346092 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:50:48.128449  346092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:50:48.128485  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	W0803 23:50:48.128555  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	W0803 23:50:48.128576  346092 proxy.go:119] fail to check proxy env: Error ip not in block
	I0803 23:50:48.128633  346092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:50:48.128650  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:50:48.131323  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.131347  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.131817  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:48.131848  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.131891  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:48.131906  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:48.132081  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:48.132094  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:50:48.132320  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:48.132324  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:50:48.132523  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:48.132533  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:50:48.132701  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:48.132773  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:50:48.389379  346092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:50:48.395859  346092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:50:48.395928  346092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:50:48.415300  346092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0803 23:50:48.415326  346092 start.go:495] detecting cgroup driver to use...
	I0803 23:50:48.415389  346092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:50:48.434790  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:50:48.449942  346092 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:50:48.450002  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:50:48.464339  346092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:50:48.479343  346092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:50:48.598044  346092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:50:48.771836  346092 docker.go:233] disabling docker service ...
	I0803 23:50:48.771936  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:50:48.786743  346092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:50:48.800909  346092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:50:48.929721  346092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:50:49.070946  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:50:49.085981  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:50:49.107145  346092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:50:49.107204  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.118494  346092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:50:49.118562  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.129818  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.141337  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.152936  346092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:50:49.165557  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.176476  346092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:50:49.195609  346092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
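Taken together, the sed edits above pin the pause image, force the cgroupfs cgroup manager, pin conmon_cgroup to "pod", and add a default_sysctls entry that opens unprivileged ports. The drop-in below is a reconstruction from those sed expressions, not a copy of the file from this VM:

	package main

	import "fmt"

	// crioDropIn is a guess at /etc/crio/crio.conf.d/02-crio.conf after the edits above.
	const crioDropIn = `[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	`

	func main() { fmt.Print(crioDropIn) }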
	I0803 23:50:49.206645  346092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:50:49.216707  346092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0803 23:50:49.216779  346092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0803 23:50:49.229560  346092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
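The sysctl probe fails until br_netfilter is loaded, because /proc/sys/net/bridge/ only exists once the module is present; the follow-up modprobe and the ip_forward write bring the guest to the state kube-proxy and the bridge CNI expect. A small check one could run on the guest to confirm both (illustrative only):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// readSysctl returns the value of a /proc/sys entry, or a note if it is absent.
	func readSysctl(path string) string {
		b, err := os.ReadFile(path)
		if err != nil {
			return "missing (" + err.Error() + ")"
		}
		return strings.TrimSpace(string(b))
	}

	func main() {
		fmt.Println("bridge-nf-call-iptables:", readSysctl("/proc/sys/net/bridge/bridge-nf-call-iptables"))
		fmt.Println("ip_forward:", readSysctl("/proc/sys/net/ipv4/ip_forward"))
	}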
	I0803 23:50:49.240199  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:50:49.363339  346092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:50:49.509934  346092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:50:49.510026  346092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:50:49.515470  346092 start.go:563] Will wait 60s for crictl version
	I0803 23:50:49.515551  346092 ssh_runner.go:195] Run: which crictl
	I0803 23:50:49.519688  346092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:50:49.558552  346092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:50:49.558653  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:50:49.588140  346092 ssh_runner.go:195] Run: crio --version
	I0803 23:50:49.618274  346092 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:50:49.619575  346092 out.go:177]   - env NO_PROXY=192.168.39.168
	I0803 23:50:49.620837  346092 out.go:177]   - env NO_PROXY=192.168.39.168,192.168.39.67
	I0803 23:50:49.622108  346092 main.go:141] libmachine: (ha-349588-m03) Calling .GetIP
	I0803 23:50:49.624763  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:49.625127  346092 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:50:49.625156  346092 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:50:49.625361  346092 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:50:49.629549  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:50:49.642294  346092 mustload.go:65] Loading cluster: ha-349588
	I0803 23:50:49.642557  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:50:49.642856  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:49.642907  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:49.661314  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0803 23:50:49.661775  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:49.662267  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:49.662289  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:49.662672  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:49.662927  346092 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:50:49.664647  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:50:49.665078  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:49.665123  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:49.681650  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37001
	I0803 23:50:49.682116  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:49.682716  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:49.682741  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:49.683105  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:49.683339  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:50:49.683495  346092 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.79
	I0803 23:50:49.683508  346092 certs.go:194] generating shared ca certs ...
	I0803 23:50:49.683525  346092 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:50:49.683695  346092 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:50:49.683752  346092 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:50:49.683765  346092 certs.go:256] generating profile certs ...
	I0803 23:50:49.683876  346092 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:50:49.683910  346092 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80
	I0803 23:50:49.683937  346092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.67 192.168.39.79 192.168.39.254]
	I0803 23:50:49.893374  346092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80 ...
	I0803 23:50:49.893411  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80: {Name:mkdc2fe11503b9f1d1c4c6c90e0b1df90eefa7ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:50:49.893608  346092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80 ...
	I0803 23:50:49.893627  346092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80: {Name:mk4257b808aff31998eea42cc17d84d4d90cd6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:50:49.893730  346092 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.24a7ca80 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:50:49.893899  346092 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.24a7ca80 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
	I0803 23:50:49.894070  346092 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:50:49.894092  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:50:49.894112  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:50:49.894132  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:50:49.894149  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:50:49.894168  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:50:49.894188  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:50:49.894206  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:50:49.894225  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
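Because a third control-plane node is being added, the apiserver serving certificate is regenerated above with SANs that include every node IP plus the HA VIP 192.168.39.254 before being synced to /var/lib/minikube/certs. A hedged sketch for checking that a given address is covered by the resulting certificate (the path is the in-VM location named in the log; this is not minikube code and would be run on the node):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		want := net.ParseIP("192.168.39.254") // the APIServerHAVIP from the profile config
		covered := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(want) {
				covered = true
				break
			}
		}
		fmt.Println("VIP covered by apiserver cert SANs:", covered)
	}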
	I0803 23:50:49.894291  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:50:49.894333  346092 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:50:49.894348  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:50:49.894383  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:50:49.894416  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:50:49.894447  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:50:49.894501  346092 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:50:49.894539  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:50:49.894563  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:49.894581  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:50:49.894629  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:50:49.897587  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:49.897949  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:50:49.897980  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:49.898200  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:50:49.898435  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:50:49.898608  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:50:49.898763  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:50:49.969917  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0803 23:50:49.975416  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0803 23:50:49.988587  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0803 23:50:49.995102  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0803 23:50:50.010263  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0803 23:50:50.015483  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0803 23:50:50.027162  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0803 23:50:50.031962  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0803 23:50:50.043075  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0803 23:50:50.047433  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0803 23:50:50.061685  346092 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0803 23:50:50.066714  346092 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0803 23:50:50.078785  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:50:50.107115  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:50:50.132767  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:50:50.158356  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:50:50.183481  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0803 23:50:50.208890  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:50:50.233259  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:50:50.258319  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:50:50.283420  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:50:50.308734  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:50:50.332877  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:50:50.358589  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0803 23:50:50.378002  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0803 23:50:50.397027  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0803 23:50:50.415515  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0803 23:50:50.432653  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0803 23:50:50.451386  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0803 23:50:50.469186  346092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0803 23:50:50.487462  346092 ssh_runner.go:195] Run: openssl version
	I0803 23:50:50.494163  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:50:50.506441  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:50.511410  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:50.511508  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:50:50.518230  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:50:50.529617  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:50:50.541105  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:50:50.545860  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:50:50.545931  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:50:50.551998  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:50:50.563960  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:50:50.575845  346092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:50:50.580600  346092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:50:50.580681  346092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:50:50.586680  346092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:50:50.598021  346092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:50:50.602251  346092 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 23:50:50.602313  346092 kubeadm.go:934] updating node {m03 192.168.39.79 8443 v1.30.3 crio true true} ...
	I0803 23:50:50.602404  346092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
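The drop-in above overrides ExecStart so this kubelet advertises the node's own address and name; only --hostname-override and --node-ip differ per node. A toy helper assembling that line (illustrative, not minikube's template code):

	package main

	import "fmt"

	// kubeletExecStart rebuilds the ExecStart flags shown in the unit above.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		return fmt.Sprintf(
			"/var/lib/minikube/binaries/%s/kubelet "+
				"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
				"--config=/var/lib/kubelet/config.yaml "+
				"--hostname-override=%s "+
				"--kubeconfig=/etc/kubernetes/kubelet.conf "+
				"--node-ip=%s",
			version, nodeName, nodeIP)
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.30.3", "ha-349588-m03", "192.168.39.79"))
	}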
	I0803 23:50:50.602429  346092 kube-vip.go:115] generating kube-vip config ...
	I0803 23:50:50.602467  346092 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:50:50.619648  346092 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:50:50.619721  346092 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
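This static pod manifest runs kube-vip on the new control-plane node: vip_leaderelection and the plndr-cp-lock lease elect a single holder of 192.168.39.254 (the APIServerHAVIP from the profile config), and lb_enable load-balances apiserver traffic across the control planes on port 8443. A tiny reachability check against that VIP, with the address and port copied from the manifest above (illustrative only):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Once a kube-vip instance holds the lease, the VIP should accept TCP on 8443.
		conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("VIP reachable at", conn.RemoteAddr())
	}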
	I0803 23:50:50.619777  346092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:50:50.630085  346092 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0803 23:50:50.630144  346092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0803 23:50:50.640083  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0803 23:50:50.640126  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:50:50.640138  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0803 23:50:50.640144  346092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0803 23:50:50.640152  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:50:50.640196  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:50:50.640219  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0803 23:50:50.640219  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0803 23:50:50.658556  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0803 23:50:50.658604  346092 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:50:50.658606  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0803 23:50:50.658650  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0803 23:50:50.658680  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0803 23:50:50.658723  346092 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0803 23:50:50.690959  346092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0803 23:50:50.691012  346092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
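The kubeadm, kubectl and kubelet binaries are pushed from the local cache into /var/lib/minikube/binaries once the stat checks above show they are missing, and the download URLs reference a companion .sha256 file for integrity. A generic sketch of that kind of verification (the path is a placeholder and the digest to compare against comes from the published .sha256 file, not from this run):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// sha256Of streams a file through SHA-256 and returns the hex digest.
	func sha256Of(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		got, err := sha256Of("/path/to/cache/linux/amd64/v1.30.3/kubelet") // placeholder path
		if err != nil {
			fmt.Println("hash failed:", err)
			return
		}
		fmt.Println("sha256:", got) // compare against the published .sha256 value
	}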
	I0803 23:50:51.642253  346092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0803 23:50:51.652555  346092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0803 23:50:51.670949  346092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:50:51.689089  346092 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:50:51.706834  346092 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:50:51.711106  346092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:50:51.724182  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:50:51.847681  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:50:51.869416  346092 host.go:66] Checking if "ha-349588" exists ...
	I0803 23:50:51.869884  346092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:50:51.869941  346092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:50:51.886556  346092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0803 23:50:51.888144  346092 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:50:51.888782  346092 main.go:141] libmachine: Using API Version  1
	I0803 23:50:51.888815  346092 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:50:51.889193  346092 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:50:51.889432  346092 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:50:51.889615  346092 start.go:317] joinCluster: &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:50:51.889756  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0803 23:50:51.889775  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:50:51.893005  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:51.893469  346092 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:50:51.893519  346092 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:50:51.893703  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:50:51.893926  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:50:51.894096  346092 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:50:51.894277  346092 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:50:52.068131  346092 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:50:52.068197  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jhmyct.fs9mmu6drhozseqf --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m03 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443"
	I0803 23:51:16.339682  346092 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jhmyct.fs9mmu6drhozseqf --discovery-token-ca-cert-hash sha256:11781423ede4edadd0063d77ce291f4baa18e593b8841b475f33d9aa1697c33c --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m03 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443": (24.271445189s)
	I0803 23:51:16.339733  346092 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0803 23:51:17.004233  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-349588-m03 minikube.k8s.io/updated_at=2024_08_03T23_51_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=ha-349588 minikube.k8s.io/primary=false
	I0803 23:51:17.133330  346092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-349588-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0803 23:51:17.258692  346092 start.go:319] duration metric: took 25.369072533s to joinCluster
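The sequence above is minikube's control-plane join flow for node m03: mint a join token on the existing control plane, run kubeadm join against the HA endpoint, start the kubelet, then label and untaint the new node. A minimal shell sketch of the same steps, assembled from the commands recorded in this log (the token and discovery hash are placeholders here, not reusable values):
	# print a fresh join command on an existing control-plane node
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0
	# join the new machine as an additional control-plane member
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-349588-m03 --control-plane --apiserver-advertise-address=192.168.39.79 --apiserver-bind-port=8443
	# bring up the kubelet on the joined node
	sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet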
	I0803 23:51:17.258795  346092 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0803 23:51:17.259136  346092 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:51:17.260411  346092 out.go:177] * Verifying Kubernetes components...
	I0803 23:51:17.261728  346092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:51:17.568914  346092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:51:17.612603  346092 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:51:17.613015  346092 kapi.go:59] client config for ha-349588: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key", CAFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0803 23:51:17.613118  346092 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.168:8443
	I0803 23:51:17.613361  346092 node_ready.go:35] waiting up to 6m0s for node "ha-349588-m03" to be "Ready" ...
	I0803 23:51:17.613453  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:17.613464  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:17.613472  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:17.613477  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:17.616902  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:18.113863  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:18.113893  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:18.113905  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:18.113910  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:18.117470  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:18.613693  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:18.613717  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:18.613727  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:18.613735  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:18.617494  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:19.114494  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:19.114519  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:19.114528  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:19.114533  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:19.118484  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:19.614244  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:19.614267  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:19.614278  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:19.614289  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:19.618392  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:19.619001  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:20.114427  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:20.114450  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:20.114458  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:20.114463  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:20.118888  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:20.614464  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:20.614496  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:20.614509  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:20.614515  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:20.620459  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:51:21.114661  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:21.114690  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:21.114701  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:21.114706  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:21.119029  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:21.613753  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:21.613779  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:21.613788  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:21.613794  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:21.617083  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:22.114520  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:22.114548  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:22.114559  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:22.114564  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:22.117991  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:22.118703  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:22.614188  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:22.614211  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:22.614220  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:22.614223  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:22.617880  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:23.113699  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:23.113732  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:23.113741  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:23.113747  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:23.117692  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:23.613659  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:23.613686  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:23.613695  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:23.613698  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:23.617919  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:24.113692  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:24.113722  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:24.113731  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:24.113735  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:24.117496  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:24.613911  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:24.613936  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:24.613945  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:24.613951  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:24.617695  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:24.618387  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:25.113589  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:25.113615  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:25.113622  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:25.113625  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:25.117040  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:25.614494  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:25.614518  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:25.614527  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:25.614530  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:25.618351  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:26.114570  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:26.114600  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:26.114613  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:26.114617  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:26.117984  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:26.613722  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:26.613752  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:26.613762  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:26.613765  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:26.617091  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:27.113607  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:27.113636  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:27.113646  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:27.113651  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:27.116932  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:27.117575  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:27.613684  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:27.613707  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:27.613716  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:27.613719  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:27.617010  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:28.114036  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:28.114061  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:28.114072  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:28.114077  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:28.117689  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:28.613694  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:28.613717  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:28.613727  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:28.613731  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:28.617175  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:29.114486  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:29.114516  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:29.114528  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:29.114534  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:29.117960  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:29.118584  346092 node_ready.go:53] node "ha-349588-m03" has status "Ready":"False"
	I0803 23:51:29.614581  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:29.614606  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:29.614615  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:29.614619  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:29.618157  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:30.113701  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:30.113728  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:30.113738  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:30.113745  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:30.118868  346092 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0803 23:51:30.614474  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:30.614503  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:30.614516  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:30.614522  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:30.618005  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.113747  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:31.113773  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.113784  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.113789  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.117249  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.117818  346092 node_ready.go:49] node "ha-349588-m03" has status "Ready":"True"
	I0803 23:51:31.117844  346092 node_ready.go:38] duration metric: took 13.504465294s for node "ha-349588-m03" to be "Ready" ...
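The loop above polls GET /api/v1/nodes/ha-349588-m03 roughly every 500ms until the node reports Ready, which here takes about 13.5 seconds after the join. A rough manual equivalent, shown only as a sketch and assuming kubectl plus the kubeconfig path used by this run:
	# block until the node's Ready condition is True (same 6m budget as the test)
	kubectl --kubeconfig=/home/jenkins/minikube-integration/19370-323890/kubeconfig wait --for=condition=Ready node/ha-349588-m03 --timeout=6m0s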
	I0803 23:51:31.117857  346092 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:51:31.117936  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:31.117948  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.117957  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.117963  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.125096  346092 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0803 23:51:31.132659  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.132757  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fzmtg
	I0803 23:51:31.132765  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.132773  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.132777  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.136446  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.137409  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.137425  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.137433  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.137437  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.140711  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.141628  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.141652  346092 pod_ready.go:81] duration metric: took 8.959263ms for pod "coredns-7db6d8ff4d-fzmtg" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.141664  346092 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.141746  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-z8qt6
	I0803 23:51:31.141756  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.141766  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.141774  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.144612  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.145703  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.145717  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.145724  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.145729  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.148882  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.149402  346092 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.149422  346092 pod_ready.go:81] duration metric: took 7.748921ms for pod "coredns-7db6d8ff4d-z8qt6" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.149433  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.149524  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588
	I0803 23:51:31.149537  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.149547  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.149554  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.151974  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.152558  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.152572  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.152579  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.152583  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.154985  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.155502  346092 pod_ready.go:92] pod "etcd-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.155526  346092 pod_ready.go:81] duration metric: took 6.085151ms for pod "etcd-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.155537  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.155596  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m02
	I0803 23:51:31.155603  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.155610  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.155613  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.158896  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.159772  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:31.159786  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.159793  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.159797  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.162550  346092 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0803 23:51:31.163470  346092 pod_ready.go:92] pod "etcd-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.163488  346092 pod_ready.go:81] duration metric: took 7.945539ms for pod "etcd-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.163497  346092 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.313805  346092 request.go:629] Waited for 150.235244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m03
	I0803 23:51:31.313887  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/etcd-ha-349588-m03
	I0803 23:51:31.313894  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.313903  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.313910  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.316950  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.514235  346092 request.go:629] Waited for 196.41936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:31.514342  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:31.514350  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.514360  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.514370  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.517499  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.518373  346092 pod_ready.go:92] pod "etcd-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.518391  346092 pod_ready.go:81] duration metric: took 354.888561ms for pod "etcd-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.518408  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.714574  346092 request.go:629] Waited for 196.078655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:51:31.714640  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588
	I0803 23:51:31.714645  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.714654  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.714660  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.718192  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:31.914516  346092 request.go:629] Waited for 195.494317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.914594  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:31.914602  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:31.914614  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:31.914624  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:31.920699  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:31.922297  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:31.922322  346092 pod_ready.go:81] duration metric: took 403.9068ms for pod "kube-apiserver-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:31.922337  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.114309  346092 request.go:629] Waited for 191.882286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:51:32.114410  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m02
	I0803 23:51:32.114422  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.114436  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.114446  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.118362  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:32.314850  346092 request.go:629] Waited for 195.414465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:32.314943  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:32.314954  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.314968  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.314978  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.319424  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:32.319937  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:32.319956  346092 pod_ready.go:81] duration metric: took 397.612453ms for pod "kube-apiserver-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.319968  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.514132  346092 request.go:629] Waited for 194.066274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m03
	I0803 23:51:32.514207  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-349588-m03
	I0803 23:51:32.514218  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.514230  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.514239  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.517826  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:32.714186  346092 request.go:629] Waited for 195.384867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:32.714263  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:32.714268  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.714276  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.714280  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.717622  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:32.718276  346092 pod_ready.go:92] pod "kube-apiserver-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:32.718295  346092 pod_ready.go:81] duration metric: took 398.320232ms for pod "kube-apiserver-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.718305  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:32.914423  346092 request.go:629] Waited for 196.027987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:51:32.914519  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588
	I0803 23:51:32.914531  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:32.914544  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:32.914557  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:32.918214  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.114290  346092 request.go:629] Waited for 195.385789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:33.114354  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:33.114359  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.114367  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.114372  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.118031  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.118758  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:33.118786  346092 pod_ready.go:81] duration metric: took 400.47234ms for pod "kube-controller-manager-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.118801  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.313773  346092 request.go:629] Waited for 194.874757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:51:33.313869  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m02
	I0803 23:51:33.313886  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.313897  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.313904  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.322352  346092 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0803 23:51:33.514604  346092 request.go:629] Waited for 191.39455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:33.514693  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:33.514701  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.514733  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.514761  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.518436  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.519029  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:33.519057  346092 pod_ready.go:81] duration metric: took 400.246953ms for pod "kube-controller-manager-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.519070  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.714097  346092 request.go:629] Waited for 194.942392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m03
	I0803 23:51:33.714177  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-349588-m03
	I0803 23:51:33.714183  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.714191  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.714198  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.718005  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.914140  346092 request.go:629] Waited for 195.367976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:33.914237  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:33.914248  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:33.914260  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:33.914268  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:33.918105  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:33.918773  346092 pod_ready.go:92] pod "kube-controller-manager-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:33.918794  346092 pod_ready.go:81] duration metric: took 399.718883ms for pod "kube-controller-manager-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:33.918804  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.113900  346092 request.go:629] Waited for 194.98485ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:51:34.113982  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bbzdt
	I0803 23:51:34.113991  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.114001  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.114010  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.117261  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:34.313842  346092 request.go:629] Waited for 195.884146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:34.313923  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:34.313928  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.313936  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.313941  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.318055  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:34.318690  346092 pod_ready.go:92] pod "kube-proxy-bbzdt" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:34.318718  346092 pod_ready.go:81] duration metric: took 399.906769ms for pod "kube-proxy-bbzdt" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.318733  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.514717  346092 request.go:629] Waited for 195.884216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:51:34.514827  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gbg5q
	I0803 23:51:34.514837  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.514846  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.514857  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.518454  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:34.713786  346092 request.go:629] Waited for 194.249312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:34.713853  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:34.713858  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.713867  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.713872  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.717311  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:34.718121  346092 pod_ready.go:92] pod "kube-proxy-gbg5q" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:34.718146  346092 pod_ready.go:81] duration metric: took 399.405642ms for pod "kube-proxy-gbg5q" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.718156  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gxhmd" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:34.914190  346092 request.go:629] Waited for 195.951933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gxhmd
	I0803 23:51:34.914334  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gxhmd
	I0803 23:51:34.914349  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:34.914359  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:34.914368  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:34.918014  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.114246  346092 request.go:629] Waited for 195.393665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:35.114346  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:35.114351  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.114360  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.114364  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.120400  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:35.120927  346092 pod_ready.go:92] pod "kube-proxy-gxhmd" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:35.120947  346092 pod_ready.go:81] duration metric: took 402.784938ms for pod "kube-proxy-gxhmd" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.120957  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.314134  346092 request.go:629] Waited for 193.077756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:51:35.314197  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588
	I0803 23:51:35.314204  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.314212  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.314216  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.317495  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.514753  346092 request.go:629] Waited for 196.397541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:35.514819  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588
	I0803 23:51:35.514824  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.514832  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.514837  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.518678  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.519382  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:35.519403  346092 pod_ready.go:81] duration metric: took 398.440069ms for pod "kube-scheduler-ha-349588" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.519413  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.714054  346092 request.go:629] Waited for 194.546982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:51:35.714123  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m02
	I0803 23:51:35.714131  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.714139  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.714143  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.717555  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.914758  346092 request.go:629] Waited for 196.375402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:35.914818  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m02
	I0803 23:51:35.914824  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:35.914832  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:35.914836  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:35.918263  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:35.918956  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:35.918981  346092 pod_ready.go:81] duration metric: took 399.560987ms for pod "kube-scheduler-ha-349588-m02" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:35.918996  346092 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:36.114089  346092 request.go:629] Waited for 195.010266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m03
	I0803 23:51:36.114169  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-349588-m03
	I0803 23:51:36.114176  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.114187  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.114203  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.117295  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:36.314318  346092 request.go:629] Waited for 196.362498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:36.314391  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes/ha-349588-m03
	I0803 23:51:36.314396  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.314405  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.314408  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.317683  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:36.318319  346092 pod_ready.go:92] pod "kube-scheduler-ha-349588-m03" in "kube-system" namespace has status "Ready":"True"
	I0803 23:51:36.318338  346092 pod_ready.go:81] duration metric: took 399.336283ms for pod "kube-scheduler-ha-349588-m03" in "kube-system" namespace to be "Ready" ...
	I0803 23:51:36.318349  346092 pod_ready.go:38] duration metric: took 5.200478543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
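The readiness pass above checks each system-critical pod per node, selected by the labels listed in the log. A hedged shell sketch that performs roughly the same check with kubectl, assuming the same kubeconfig and label set:
	# wait for each system-critical component selected by label
	for l in k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --kubeconfig=/home/jenkins/minikube-integration/19370-323890/kubeconfig -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=6m0s
	done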
	I0803 23:51:36.318365  346092 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:51:36.318431  346092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:51:36.335947  346092 api_server.go:72] duration metric: took 19.077109461s to wait for apiserver process to appear ...
	I0803 23:51:36.335981  346092 api_server.go:88] waiting for apiserver healthz status ...
	I0803 23:51:36.336001  346092 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0803 23:51:36.342426  346092 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0803 23:51:36.342513  346092 round_trippers.go:463] GET https://192.168.39.168:8443/version
	I0803 23:51:36.342524  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.342534  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.342541  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.343354  346092 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0803 23:51:36.343424  346092 api_server.go:141] control plane version: v1.30.3
	I0803 23:51:36.343444  346092 api_server.go:131] duration metric: took 7.456114ms to wait for apiserver health ...
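The health check above first confirms a kube-apiserver process is running on the node, then probes the /healthz endpoint and reads the control-plane version. A manual sketch of the same probes; the curl call is an assumption (the test uses its own Go HTTP client, and anonymous access to /healthz may be restricted on some clusters):
	# confirm the apiserver process exists (command taken from the log)
	sudo pgrep -xnf kube-apiserver.*minikube.*
	# probe the health endpoint; expect the body "ok"
	curl -k https://192.168.39.168:8443/healthz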
	I0803 23:51:36.343454  346092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 23:51:36.514719  346092 request.go:629] Waited for 171.163392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.514813  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.514819  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.514826  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.514831  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.521672  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:36.528449  346092 system_pods.go:59] 24 kube-system pods found
	I0803 23:51:36.528484  346092 system_pods.go:61] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:51:36.528488  346092 system_pods.go:61] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:51:36.528492  346092 system_pods.go:61] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:51:36.528496  346092 system_pods.go:61] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:51:36.528499  346092 system_pods.go:61] "etcd-ha-349588-m03" [b94d4e04-56f0-4892-927a-346559af3711] Running
	I0803 23:51:36.528502  346092 system_pods.go:61] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:51:36.528505  346092 system_pods.go:61] "kindnet-7sr59" [09355fc1-1a86-4f3f-be39-4e2e315e679f] Running
	I0803 23:51:36.528508  346092 system_pods.go:61] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:51:36.528511  346092 system_pods.go:61] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:51:36.528515  346092 system_pods.go:61] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:51:36.528518  346092 system_pods.go:61] "kube-apiserver-ha-349588-m03" [fb835dfe-b2d1-49ea-be6a-1c2f2c682095] Running
	I0803 23:51:36.528521  346092 system_pods.go:61] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:51:36.528524  346092 system_pods.go:61] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:51:36.528528  346092 system_pods.go:61] "kube-controller-manager-ha-349588-m03" [c4531c53-f3ca-42ef-a58b-1c30e752607b] Running
	I0803 23:51:36.528530  346092 system_pods.go:61] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:51:36.528533  346092 system_pods.go:61] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:51:36.528537  346092 system_pods.go:61] "kube-proxy-gxhmd" [4781a85e-af7c-49c2-80fb-c85db217189e] Running
	I0803 23:51:36.528540  346092 system_pods.go:61] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:51:36.528543  346092 system_pods.go:61] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:51:36.528549  346092 system_pods.go:61] "kube-scheduler-ha-349588-m03" [49495c84-d655-44a6-b732-a3520fc9e4db] Running
	I0803 23:51:36.528552  346092 system_pods.go:61] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:51:36.528555  346092 system_pods.go:61] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:51:36.528558  346092 system_pods.go:61] "kube-vip-ha-349588-m03" [17db3ee6-75d6-44a2-b663-22eb669c3916] Running
	I0803 23:51:36.528561  346092 system_pods.go:61] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:51:36.528567  346092 system_pods.go:74] duration metric: took 185.106343ms to wait for pod list to return data ...
	I0803 23:51:36.528578  346092 default_sa.go:34] waiting for default service account to be created ...
	I0803 23:51:36.714053  346092 request.go:629] Waited for 185.392294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:51:36.714147  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/default/serviceaccounts
	I0803 23:51:36.714158  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.714167  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.714172  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.718328  346092 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0803 23:51:36.718486  346092 default_sa.go:45] found service account: "default"
	I0803 23:51:36.718504  346092 default_sa.go:55] duration metric: took 189.92038ms for default service account to be created ...
	I0803 23:51:36.718512  346092 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 23:51:36.914027  346092 request.go:629] Waited for 195.407927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.914096  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/namespaces/kube-system/pods
	I0803 23:51:36.914102  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:36.914112  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:36.914120  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:36.920598  346092 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0803 23:51:36.927121  346092 system_pods.go:86] 24 kube-system pods found
	I0803 23:51:36.927158  346092 system_pods.go:89] "coredns-7db6d8ff4d-fzmtg" [8ac3c975-02c6-485b-9cfa-d754718d255e] Running
	I0803 23:51:36.927164  346092 system_pods.go:89] "coredns-7db6d8ff4d-z8qt6" [ab1ff267-f331-4404-8610-50fb0680a2c5] Running
	I0803 23:51:36.927168  346092 system_pods.go:89] "etcd-ha-349588" [40229bdc-5c2b-4e53-899d-7cd9cb7e7bbd] Running
	I0803 23:51:36.927172  346092 system_pods.go:89] "etcd-ha-349588-m02" [4c84efdb-de11-4c4e-9633-08cbddaa9f68] Running
	I0803 23:51:36.927176  346092 system_pods.go:89] "etcd-ha-349588-m03" [b94d4e04-56f0-4892-927a-346559af3711] Running
	I0803 23:51:36.927181  346092 system_pods.go:89] "kindnet-2q4kc" [720b92aa-c5c9-4664-a163-7c94fd5b3a4d] Running
	I0803 23:51:36.927185  346092 system_pods.go:89] "kindnet-7sr59" [09355fc1-1a86-4f3f-be39-4e2e315e679f] Running
	I0803 23:51:36.927189  346092 system_pods.go:89] "kindnet-zqhp6" [659301da-5bc8-4246-b8f4-629a92b42508] Running
	I0803 23:51:36.927192  346092 system_pods.go:89] "kube-apiserver-ha-349588" [b11bc735-7a9a-4293-bc8c-4491a7ba030d] Running
	I0803 23:51:36.927196  346092 system_pods.go:89] "kube-apiserver-ha-349588-m02" [b8ce7573-4524-428d-90bf-292bde26ce27] Running
	I0803 23:51:36.927200  346092 system_pods.go:89] "kube-apiserver-ha-349588-m03" [fb835dfe-b2d1-49ea-be6a-1c2f2c682095] Running
	I0803 23:51:36.927205  346092 system_pods.go:89] "kube-controller-manager-ha-349588" [17ccb6e0-52a2-4e7f-80f6-be5a15feae7e] Running
	I0803 23:51:36.927211  346092 system_pods.go:89] "kube-controller-manager-ha-349588-m02" [9f1b6f91-e81f-4e66-bbac-698722e26b0f] Running
	I0803 23:51:36.927217  346092 system_pods.go:89] "kube-controller-manager-ha-349588-m03" [c4531c53-f3ca-42ef-a58b-1c30e752607b] Running
	I0803 23:51:36.927222  346092 system_pods.go:89] "kube-proxy-bbzdt" [5f4d564f-843e-4284-a9fa-792241d9ba26] Running
	I0803 23:51:36.927227  346092 system_pods.go:89] "kube-proxy-gbg5q" [bf18e7f5-fe11-4421-9552-e6d6c5476aa3] Running
	I0803 23:51:36.927233  346092 system_pods.go:89] "kube-proxy-gxhmd" [4781a85e-af7c-49c2-80fb-c85db217189e] Running
	I0803 23:51:36.927239  346092 system_pods.go:89] "kube-scheduler-ha-349588" [87cf9f23-8ef4-4ac1-b408-b1b343398020] Running
	I0803 23:51:36.927246  346092 system_pods.go:89] "kube-scheduler-ha-349588-m02" [3c7bd1ea-e6e5-4876-b019-3518956f9014] Running
	I0803 23:51:36.927259  346092 system_pods.go:89] "kube-scheduler-ha-349588-m03" [49495c84-d655-44a6-b732-a3520fc9e4db] Running
	I0803 23:51:36.927264  346092 system_pods.go:89] "kube-vip-ha-349588" [b3a4c252-ee5e-4b2f-b982-a09904a9c547] Running
	I0803 23:51:36.927268  346092 system_pods.go:89] "kube-vip-ha-349588-m02" [f438bddb-41ff-46e7-9114-eba46b85d8fb] Running
	I0803 23:51:36.927275  346092 system_pods.go:89] "kube-vip-ha-349588-m03" [17db3ee6-75d6-44a2-b663-22eb669c3916] Running
	I0803 23:51:36.927285  346092 system_pods.go:89] "storage-provisioner" [e5eb5e5c-5ffb-4036-8a22-ed2204813520] Running
	I0803 23:51:36.927296  346092 system_pods.go:126] duration metric: took 208.777353ms to wait for k8s-apps to be running ...
	I0803 23:51:36.927304  346092 system_svc.go:44] waiting for kubelet service to be running ...
	I0803 23:51:36.927363  346092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:51:36.945526  346092 system_svc.go:56] duration metric: took 18.195559ms WaitForService to wait for kubelet
	I0803 23:51:36.945565  346092 kubeadm.go:582] duration metric: took 19.686733073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:51:36.945591  346092 node_conditions.go:102] verifying NodePressure condition ...
	I0803 23:51:37.113811  346092 request.go:629] Waited for 168.118325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.168:8443/api/v1/nodes
	I0803 23:51:37.113911  346092 round_trippers.go:463] GET https://192.168.39.168:8443/api/v1/nodes
	I0803 23:51:37.113922  346092 round_trippers.go:469] Request Headers:
	I0803 23:51:37.113934  346092 round_trippers.go:473]     Accept: application/json, */*
	I0803 23:51:37.113943  346092 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0803 23:51:37.117855  346092 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0803 23:51:37.119104  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:51:37.119131  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:51:37.119165  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:51:37.119171  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:51:37.119180  346092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0803 23:51:37.119185  346092 node_conditions.go:123] node cpu capacity is 2
	I0803 23:51:37.119192  346092 node_conditions.go:105] duration metric: took 173.595468ms to run NodePressure ...
	I0803 23:51:37.119210  346092 start.go:241] waiting for startup goroutines ...
	I0803 23:51:37.119240  346092 start.go:255] writing updated cluster config ...
	I0803 23:51:37.119591  346092 ssh_runner.go:195] Run: rm -f paused
	I0803 23:51:37.173652  346092 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0803 23:51:37.175744  346092 out.go:177] * Done! kubectl is now configured to use "ha-349588" cluster and "default" namespace by default
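For reference, the waits recorded above (kube-system pods Running, the "default" ServiceAccount present, node capacity under the NodePressure step) reduce to a handful of Kubernetes API list/get calls. The following is a minimal client-go sketch of those same checks, written only for illustration: it is not minikube's implementation, and the kubeconfig location (taken from $KUBECONFIG) is an assumption.

	// readiness_sketch.go - hypothetical; mirrors the checks logged above.
	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the kubeconfig path comes from the environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// Error handling on the List calls is elided for brevity in this sketch.

		// 1) Every kube-system pod should report Running (cf. the system_pods.go wait).
		pods, _ := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Println("not running:", p.Name, p.Status.Phase)
			}
		}

		// 2) The "default" ServiceAccount must exist (cf. the default_sa.go wait).
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
			fmt.Println("default service account not ready:", err)
		}

		// 3) Node CPU / ephemeral-storage capacity, as printed in the NodePressure step.
		nodes, _ := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		for _, n := range nodes.Items {
			fmt.Println(n.Name, "cpu:", n.Status.Capacity.Cpu(), "ephemeral-storage:", n.Status.Capacity.StorageEphemeral())
		}
	}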
	
	
	==> CRI-O <==
	Aug 03 23:56:07 ha-349588 crio[685]: time="2024-08-03 23:56:07.990806720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729367990780995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7f31650-1547-495b-a621-bc4434d27d54 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:07 ha-349588 crio[685]: time="2024-08-03 23:56:07.991440822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b560884b-a03f-49da-92fc-935f61b75c22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:07 ha-349588 crio[685]: time="2024-08-03 23:56:07.991535548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b560884b-a03f-49da-92fc-935f61b75c22 name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:07 ha-349588 crio[685]: time="2024-08-03 23:56:07.991799217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b560884b-a03f-49da-92fc-935f61b75c22 name=/runtime.v1.RuntimeService/ListContainers
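The three RPCs that recur throughout these crio debug entries (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and /runtime.v1.RuntimeService/ListContainers with an empty filter) are ordinary CRI gRPC calls. Below is a minimal Go sketch that issues the same calls against a CRI-O endpoint; it is illustration only, not kubelet or crio code, and the socket path is an assumption for a CRI-O host.

	// cri_probe.go - hypothetical; replays the RPCs seen in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumption: the default CRI-O socket location on the node.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// /runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		// /runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Println("image fs:", u.GetFsId().GetMountpoint(), "used bytes:", u.GetUsedBytes().GetValue())
		}

		// /runtime.v1.RuntimeService/ListContainers with no filter, as in the log.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.GetMetadata().GetName(), c.State)
		}
	}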
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.030248017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92f22224-b44d-4e5e-a559-886b29b86394 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.030341683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92f22224-b44d-4e5e-a559-886b29b86394 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.031526505Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f6c46de-4693-45ce-8448-5cc4863169d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.031971603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729368031950427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f6c46de-4693-45ce-8448-5cc4863169d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.032502808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4fb7c65-2aa7-4101-9195-a2474a6eab7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.032574864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4fb7c65-2aa7-4101-9195-a2474a6eab7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.032829827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4fb7c65-2aa7-4101-9195-a2474a6eab7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.074069179Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=371daf17-d366-4851-bc94-d98ee59f4e96 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.074160416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=371daf17-d366-4851-bc94-d98ee59f4e96 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.075747521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48f8e94a-9a43-4843-a89c-f7255a120359 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.076192912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729368076169944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48f8e94a-9a43-4843-a89c-f7255a120359 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.077088043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3aef4649-bc6e-4f43-8580-42e02083c2cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.077192335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3aef4649-bc6e-4f43-8580-42e02083c2cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.077511220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3aef4649-bc6e-4f43-8580-42e02083c2cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.116874081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=170ad7d3-f6d8-4fe7-afb9-dd9bd0504ab3 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.116945333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=170ad7d3-f6d8-4fe7-afb9-dd9bd0504ab3 name=/runtime.v1.RuntimeService/Version
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.118636567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e96a1d11-b5b2-4cdd-8396-3359f4f48a14 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.119091066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729368119068098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e96a1d11-b5b2-4cdd-8396-3359f4f48a14 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.119912581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7999ecb5-07d1-49a3-8acf-7737760e6e2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.119981478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7999ecb5-07d1-49a3-8acf-7737760e6e2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 03 23:56:08 ha-349588 crio[685]: time="2024-08-03 23:56:08.120202162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729099665061085,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4f0996565c3dfaad1366923d76ecce3da0cb9ddf2f33bca9ed22fca6f9c30a,PodSandboxId:c29d357fc68b0286f6e350136649a7fe57ae29e3f690e75957b3b82e7c4d5885,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722728964608094471,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964592889323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722728964520215662,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f3
31-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722728952381271134,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172272894
8804756720,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440,PodSandboxId:c58f6f98744c895e81a8ada5022c3f2fb8af0896b21101dec18d8d51d8fb1b73,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172272893083
3943624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abdcf69362865525f307bf3fb05e99de,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722728928936328180,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35,PodSandboxId:5d722be95195feaa1f6a6230fbc1e971ed550ce25bbdcdac6cf5ef944be62340,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722728928851621247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722728928879059415,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2,PodSandboxId:b6a89d83c0aaf537d5f720c4c0da12b315ad202a46521e585cae1f60edec52f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722728928809973939,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7999ecb5-07d1-49a3-8acf-7737760e6e2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6fd002f59b0d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   a2e2fb00f6b54       busybox-fc5497c4f-4mwk4
	ed4f0996565c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   c29d357fc68b0       storage-provisioner
	c780810d93e46       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   37f34e1fe1b85       coredns-7db6d8ff4d-fzmtg
	81817890a62a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   925c168e44d83       coredns-7db6d8ff4d-z8qt6
	8706b763ebe33       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   d2e5e2b102cd4       kindnet-2q4kc
	1f48d6d5328f8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   842c0109e8643       kube-proxy-bbzdt
	4f4a81f925548       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   c58f6f98744c8       kube-vip-ha-349588
	9bd785365c881       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   69dc19cc2bbff       etcd-ha-349588
	f061678087351       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   16e8a700bcd71       kube-scheduler-ha-349588
	c7a32eac14445       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   5d722be95195f       kube-apiserver-ha-349588
	1b3755f3d86ea       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   b6a89d83c0aaf       kube-controller-manager-ha-349588
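For reference, a listing like the container status table above can normally be reproduced on the node itself with crictl. The commands below are only a sketch; they assume the ha-349588 profile is still running and that crictl is available inside the VM (the usual case for a CRI-O minikube node):

  # open a shell on the control-plane node of this profile
  minikube ssh -p ha-349588
  # list all CRI containers, including exited ones
  sudo crictl ps -a
  # inspect a single container by the ID shown in the first column
  sudo crictl inspect c6fd002f59b0d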
	
	
	==> coredns [81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87] <==
	[INFO] 10.244.0.4:58030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146295s
	[INFO] 10.244.0.4:57522 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004292718s
	[INFO] 10.244.0.4:60466 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198733s
	[INFO] 10.244.0.4:45293 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002739449s
	[INFO] 10.244.0.4:50180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129872s
	[INFO] 10.244.2.2:56181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186686s
	[INFO] 10.244.2.2:56701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166229s
	[INFO] 10.244.2.2:38728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109023s
	[INFO] 10.244.2.2:45155 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001333912s
	[INFO] 10.244.2.2:51605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083342s
	[INFO] 10.244.1.2:38219 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015823s
	[INFO] 10.244.1.2:52488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178675s
	[INFO] 10.244.1.2:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097525s
	[INFO] 10.244.0.4:55438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074628s
	[INFO] 10.244.2.2:36883 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010754s
	[INFO] 10.244.2.2:53841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090252s
	[INFO] 10.244.2.2:59602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092585s
	[INFO] 10.244.1.2:59266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147793s
	[INFO] 10.244.1.2:44530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122943s
	[INFO] 10.244.0.4:42192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097553s
	[INFO] 10.244.2.2:40701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172686s
	[INFO] 10.244.2.2:38338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166475s
	[INFO] 10.244.2.2:58001 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140105s
	[INFO] 10.244.2.2:51129 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105337s
	[INFO] 10.244.1.2:44130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106258s
	
	
	==> coredns [c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d] <==
	[INFO] 10.244.1.2:47738 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000122865s
	[INFO] 10.244.1.2:35251 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000545486s
	[INFO] 10.244.0.4:59904 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165239s
	[INFO] 10.244.0.4:38273 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132118s
	[INFO] 10.244.0.4:49517 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00021182s
	[INFO] 10.244.2.2:39556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137234s
	[INFO] 10.244.2.2:60582 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141615s
	[INFO] 10.244.2.2:36052 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074574s
	[INFO] 10.244.1.2:36007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019702s
	[INFO] 10.244.1.2:39746 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827365s
	[INFO] 10.244.1.2:47114 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078787s
	[INFO] 10.244.1.2:38856 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198841s
	[INFO] 10.244.1.2:49149 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001428046s
	[INFO] 10.244.0.4:47461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104433s
	[INFO] 10.244.0.4:47790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083369s
	[INFO] 10.244.0.4:39525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161056s
	[INFO] 10.244.2.2:58034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169362s
	[INFO] 10.244.1.2:44282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187567s
	[INFO] 10.244.1.2:48438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016257s
	[INFO] 10.244.0.4:52544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142962s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152657s
	[INFO] 10.244.0.4:45953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009439s
	[INFO] 10.244.1.2:57136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.1.2:58739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139508s
	[INFO] 10.244.1.2:50023 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125422s
	
	
	==> describe nodes <==
	Name:               ha-349588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:48:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:56:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:51:59 +0000   Sat, 03 Aug 2024 23:49:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    ha-349588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 72ab11669b434797a5e41b5352f74be2
	  System UUID:                72ab1166-9b43-4797-a5e4-1b5352f74be2
	  Boot ID:                    e1637c60-2dbe-4ea9-949e-0f2b10f03d1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4mwk4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-fzmtg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 coredns-7db6d8ff4d-z8qt6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 etcd-ha-349588                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m13s
	  kube-system                 kindnet-2q4kc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m1s
	  kube-system                 kube-apiserver-ha-349588             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-controller-manager-ha-349588    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-bbzdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-scheduler-ha-349588             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-vip-ha-349588                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m59s  kube-proxy       
	  Normal  Starting                 7m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m13s  kubelet          Node ha-349588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s  kubelet          Node ha-349588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s  kubelet          Node ha-349588 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m1s   node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal  NodeReady                6m45s  kubelet          Node ha-349588 status is now: NodeReady
	  Normal  RegisteredNode           5m49s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal  RegisteredNode           4m37s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	
	
	Name:               ha-349588-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_50_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:49:59 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:52:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 03 Aug 2024 23:52:02 +0000   Sat, 03 Aug 2024 23:53:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-349588-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8919c8bcbd284472a3c4b5b3ae885051
	  System UUID:                8919c8bc-bd28-4472-a3c4-b5b3ae885051
	  Boot ID:                    000b155d-14ed-4044-bb42-b52680d7292c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-szvhv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-349588-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m6s
	  kube-system                 kindnet-zqhp6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m8s
	  kube-system                 kube-apiserver-ha-349588-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-349588-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-gbg5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-ha-349588-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-vip-ha-349588-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m9s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m9s)  kubelet          Node ha-349588-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m9s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m6s                 node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           5m49s                node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           4m37s                node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  NodeNotReady             2m32s                node-controller  Node ha-349588-m02 status is now: NodeNotReady
	
	
	Name:               ha-349588-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_51_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:55:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:51:43 +0000   Sat, 03 Aug 2024 23:51:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-349588-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43f3523f989d4c49bec19f93fe176e08
	  System UUID:                43f3523f-989d-4c49-bec1-9f93fe176e08
	  Boot ID:                    49cb00cd-1df4-4d0c-b32a-0575118d2aca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mlkx9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-349588-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-7sr59                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m54s
	  kube-system                 kube-apiserver-ha-349588-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-ha-349588-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-gxhmd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-349588-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-vip-ha-349588-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-349588-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal  RegisteredNode           4m37s                  node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	
	
	Name:               ha-349588-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_52_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:52:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:56:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:52:47 +0000   Sat, 03 Aug 2024 23:52:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-349588-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ac9326af96243febea155e979b68343
	  System UUID:                4ac9326a-f962-43fe-bea1-55e979b68343
	  Boot ID:                    e2f3d546-daab-46ec-be7d-1fdf0a72df36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7rfzm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-proxy-2sdf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m52s (x2 over 3m52s)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x2 over 3m52s)  kubelet          Node ha-349588-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x2 over 3m52s)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal  NodeReady                3m33s                  kubelet          Node ha-349588-m04 status is now: NodeReady
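In the node descriptions above, ha-349588-m02 carries node.kubernetes.io/unreachable taints and all of its conditions are Unknown with "Kubelet stopped posting node status", while the other three nodes report Ready. The commands below are a minimal sketch of how one might pull just that information by hand; they assume kubectl is pointed at the ha-349588 cluster:

  # node overview
  kubectl get nodes -o wide
  # print only the taints of the suspect node
  kubectl get node ha-349588-m02 -o jsonpath='{.spec.taints}{"\n"}'
  # show its condition block
  kubectl describe node ha-349588-m02 | grep -A 6 'Conditions:'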
	
	
	==> dmesg <==
	[Aug 3 23:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051792] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040861] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.793620] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.513980] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.584703] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.778088] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.061103] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063697] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.170133] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.139803] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.274186] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.334862] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.414847] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.686183] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.066614] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.504623] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[Aug 3 23:49] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.728228] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.925424] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70] <==
	{"level":"warn","ts":"2024-08-03T23:56:08.335279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.385252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.410897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.420942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.427577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.450011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.461424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.468791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.472994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.475953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.485759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.486828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.4932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.499286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.503603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.507262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.51513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.521813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.528019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.531441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.534503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.539559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.546005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.552698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-03T23:56:08.586537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"e34fba8f5739efe8","from":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:56:08 up 7 min,  0 users,  load average: 0.37, 0.34, 0.18
	Linux ha-349588 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a] <==
	I0803 23:55:33.551316       1 main.go:299] handling current node
	I0803 23:55:43.552214       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:55:43.552339       1 main.go:299] handling current node
	I0803 23:55:43.552603       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:55:43.552694       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:55:43.553190       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:55:43.553648       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:55:43.553880       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:55:43.553923       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:55:53.546317       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:55:53.546500       1 main.go:299] handling current node
	I0803 23:55:53.546536       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:55:53.546572       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:55:53.546734       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:55:53.546756       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:55:53.546822       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:55:53.546851       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:56:03.550708       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:56:03.550746       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:56:03.550918       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:56:03.550944       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:56:03.551028       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:56:03.551048       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:56:03.551117       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:56:03.551147       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35] <==
	W0803 23:48:53.926522       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.168]
	I0803 23:48:53.927634       1 controller.go:615] quota admission added evaluator for: endpoints
	I0803 23:48:53.932255       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0803 23:48:54.119070       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0803 23:48:55.111182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0803 23:48:55.144783       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0803 23:48:55.170847       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0803 23:49:07.575208       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0803 23:49:08.182339       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0803 23:51:41.064985       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49338: use of closed network connection
	E0803 23:51:41.285771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49360: use of closed network connection
	E0803 23:51:41.481817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49368: use of closed network connection
	E0803 23:51:41.673398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49382: use of closed network connection
	E0803 23:51:41.860608       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49400: use of closed network connection
	E0803 23:51:42.066800       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49404: use of closed network connection
	E0803 23:51:42.265786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49410: use of closed network connection
	E0803 23:51:42.477198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49424: use of closed network connection
	E0803 23:51:42.661794       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49434: use of closed network connection
	E0803 23:51:42.962222       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49456: use of closed network connection
	E0803 23:51:43.153260       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49474: use of closed network connection
	E0803 23:51:43.341009       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49484: use of closed network connection
	E0803 23:51:43.542718       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:49500: use of closed network connection
	E0803 23:51:43.745794       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55872: use of closed network connection
	E0803 23:51:43.941278       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55888: use of closed network connection
	W0803 23:53:03.929989       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.168 192.168.39.79]
	
	
	==> kube-controller-manager [1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2] <==
	I0803 23:51:12.392050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-349588-m03\" does not exist"
	I0803 23:51:12.409979       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-349588-m03" podCIDRs=["10.244.2.0/24"]
	I0803 23:51:12.615890       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588-m03"
	I0803 23:51:38.099794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.870305ms"
	I0803 23:51:38.149470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.332864ms"
	I0803 23:51:38.150864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="656.77µs"
	I0803 23:51:38.180131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.73µs"
	I0803 23:51:38.336907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="149.035867ms"
	I0803 23:51:38.494837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.68925ms"
	I0803 23:51:38.552625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.651091ms"
	I0803 23:51:38.552748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.712µs"
	I0803 23:51:39.851012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.989391ms"
	I0803 23:51:39.851245       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.684µs"
	I0803 23:51:39.955292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.091µs"
	I0803 23:51:40.029088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.487775ms"
	I0803 23:51:40.029453       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.265µs"
	I0803 23:51:40.582519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.030815ms"
	I0803 23:51:40.582675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.813µs"
	I0803 23:52:16.772216       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-349588-m04\" does not exist"
	I0803 23:52:16.799833       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-349588-m04" podCIDRs=["10.244.3.0/24"]
	I0803 23:52:17.978544       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588-m04"
	I0803 23:52:35.312225       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-349588-m04"
	I0803 23:53:36.959632       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-349588-m04"
	I0803 23:53:37.137008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.48034ms"
	I0803 23:53:37.137626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.676µs"
	
	
	==> kube-proxy [1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511] <==
	I0803 23:49:09.173626       1 server_linux.go:69] "Using iptables proxy"
	I0803 23:49:09.204726       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.168"]
	I0803 23:49:09.262456       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0803 23:49:09.262510       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0803 23:49:09.262529       1 server_linux.go:165] "Using iptables Proxier"
	I0803 23:49:09.265850       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 23:49:09.266449       1 server.go:872] "Version info" version="v1.30.3"
	I0803 23:49:09.266491       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:49:09.267962       1 config.go:192] "Starting service config controller"
	I0803 23:49:09.268231       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 23:49:09.268329       1 config.go:101] "Starting endpoint slice config controller"
	I0803 23:49:09.268413       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 23:49:09.270529       1 config.go:319] "Starting node config controller"
	I0803 23:49:09.270556       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 23:49:09.369309       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 23:49:09.369490       1 shared_informer.go:320] Caches are synced for service config
	I0803 23:49:09.372278       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802] <==
	W0803 23:48:53.304668       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 23:48:53.304715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 23:48:53.354479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:48:53.354578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:48:53.485317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:48:53.485445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:48:53.514811       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:48:53.514858       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:48:55.773274       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0803 23:51:12.541229       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6mxhx\": pod kube-proxy-6mxhx is already assigned to node \"ha-349588-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6mxhx" node="ha-349588-m03"
	E0803 23:51:12.542506       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e0f96924-b772-456b-b2f6-698af8e94038(kube-system/kube-proxy-6mxhx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6mxhx"
	E0803 23:51:12.543622       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6mxhx\": pod kube-proxy-6mxhx is already assigned to node \"ha-349588-m03\"" pod="kube-system/kube-proxy-6mxhx"
	I0803 23:51:12.543720       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6mxhx" node="ha-349588-m03"
	E0803 23:52:16.874914       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7rfzm\": pod kindnet-7rfzm is already assigned to node \"ha-349588-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-7rfzm" node="ha-349588-m04"
	E0803 23:52:16.875424       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b882822a-1717-446e-9816-b0d709515f5a(kube-system/kindnet-7rfzm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7rfzm"
	E0803 23:52:16.876997       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7rfzm\": pod kindnet-7rfzm is already assigned to node \"ha-349588-m04\"" pod="kube-system/kindnet-7rfzm"
	I0803 23:52:16.877333       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7rfzm" node="ha-349588-m04"
	E0803 23:52:16.874987       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2sdf6\": pod kube-proxy-2sdf6 is already assigned to node \"ha-349588-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2sdf6" node="ha-349588-m04"
	E0803 23:52:16.878219       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2c41bdec-3f55-4626-9c5b-b757faed7907(kube-system/kube-proxy-2sdf6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-2sdf6"
	E0803 23:52:16.878316       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2sdf6\": pod kube-proxy-2sdf6 is already assigned to node \"ha-349588-m04\"" pod="kube-system/kube-proxy-2sdf6"
	I0803 23:52:16.878440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-2sdf6" node="ha-349588-m04"
	E0803 23:52:17.021480       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6rctf\": pod kube-proxy-6rctf is already assigned to node \"ha-349588-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6rctf" node="ha-349588-m04"
	E0803 23:52:17.021686       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 22d8b275-6e92-4f89-85b5-5138eb55855b(kube-system/kube-proxy-6rctf) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6rctf"
	E0803 23:52:17.021811       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6rctf\": pod kube-proxy-6rctf is already assigned to node \"ha-349588-m04\"" pod="kube-system/kube-proxy-6rctf"
	I0803 23:52:17.022047       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6rctf" node="ha-349588-m04"
	
	
	==> kubelet <==
	Aug 03 23:51:55 ha-349588 kubelet[1373]: E0803 23:51:55.143837    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:51:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:51:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:51:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:51:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:52:55 ha-349588 kubelet[1373]: E0803 23:52:55.144049    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:52:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:52:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:52:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:52:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:53:55 ha-349588 kubelet[1373]: E0803 23:53:55.154670    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:53:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:53:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:53:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:53:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:54:55 ha-349588 kubelet[1373]: E0803 23:54:55.142276    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:54:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:54:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:54:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:54:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 03 23:55:55 ha-349588 kubelet[1373]: E0803 23:55:55.140475    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 03 23:55:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 03 23:55:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 03 23:55:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 03 23:55:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-349588 -n ha-349588
helpers_test.go:261: (dbg) Run:  kubectl --context ha-349588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-349588 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-349588 -v=7 --alsologtostderr
E0803 23:57:24.417150  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:57:52.102775  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-349588 -v=7 --alsologtostderr: exit status 82 (2m1.938819408s)

                                                
                                                
-- stdout --
	* Stopping node "ha-349588-m04"  ...
	* Stopping node "ha-349588-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0803 23:56:10.074171  351855 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:56:10.074418  351855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:56:10.074426  351855 out.go:304] Setting ErrFile to fd 2...
	I0803 23:56:10.074430  351855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:56:10.074648  351855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:56:10.074927  351855 out.go:298] Setting JSON to false
	I0803 23:56:10.075020  351855 mustload.go:65] Loading cluster: ha-349588
	I0803 23:56:10.075378  351855 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:56:10.075460  351855 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:56:10.075693  351855 mustload.go:65] Loading cluster: ha-349588
	I0803 23:56:10.075865  351855 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:56:10.075901  351855 stop.go:39] StopHost: ha-349588-m04
	I0803 23:56:10.076344  351855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:10.076386  351855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:10.092252  351855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I0803 23:56:10.092807  351855 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:10.093380  351855 main.go:141] libmachine: Using API Version  1
	I0803 23:56:10.093402  351855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:10.093851  351855 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:10.096715  351855 out.go:177] * Stopping node "ha-349588-m04"  ...
	I0803 23:56:10.098734  351855 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0803 23:56:10.098803  351855 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0803 23:56:10.099248  351855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0803 23:56:10.099281  351855 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0803 23:56:10.102534  351855 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:56:10.103050  351855 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:51:59 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0803 23:56:10.103087  351855 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0803 23:56:10.103258  351855 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0803 23:56:10.103467  351855 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0803 23:56:10.103653  351855 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0803 23:56:10.103839  351855 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0803 23:56:10.189465  351855 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0803 23:56:10.245949  351855 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0803 23:56:10.301623  351855 main.go:141] libmachine: Stopping "ha-349588-m04"...
	I0803 23:56:10.301663  351855 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:56:10.303199  351855 main.go:141] libmachine: (ha-349588-m04) Calling .Stop
	I0803 23:56:10.306883  351855 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 0/120
	I0803 23:56:11.533415  351855 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0803 23:56:11.534761  351855 main.go:141] libmachine: Machine "ha-349588-m04" was stopped.
	I0803 23:56:11.534783  351855 stop.go:75] duration metric: took 1.436056218s to stop
	I0803 23:56:11.534828  351855 stop.go:39] StopHost: ha-349588-m03
	I0803 23:56:11.535146  351855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:56:11.535195  351855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:56:11.550630  351855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0803 23:56:11.551167  351855 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:56:11.551691  351855 main.go:141] libmachine: Using API Version  1
	I0803 23:56:11.551712  351855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:56:11.552072  351855 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:56:11.553925  351855 out.go:177] * Stopping node "ha-349588-m03"  ...
	I0803 23:56:11.555118  351855 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0803 23:56:11.555146  351855 main.go:141] libmachine: (ha-349588-m03) Calling .DriverName
	I0803 23:56:11.555384  351855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0803 23:56:11.555412  351855 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHHostname
	I0803 23:56:11.558429  351855 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:56:11.558898  351855 main.go:141] libmachine: (ha-349588-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:c9:03", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:50:38 +0000 UTC Type:0 Mac:52:54:00:1d:c9:03 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-349588-m03 Clientid:01:52:54:00:1d:c9:03}
	I0803 23:56:11.558922  351855 main.go:141] libmachine: (ha-349588-m03) DBG | domain ha-349588-m03 has defined IP address 192.168.39.79 and MAC address 52:54:00:1d:c9:03 in network mk-ha-349588
	I0803 23:56:11.559055  351855 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHPort
	I0803 23:56:11.559245  351855 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHKeyPath
	I0803 23:56:11.559382  351855 main.go:141] libmachine: (ha-349588-m03) Calling .GetSSHUsername
	I0803 23:56:11.559532  351855 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m03/id_rsa Username:docker}
	I0803 23:56:11.649169  351855 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0803 23:56:11.703079  351855 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0803 23:56:11.757466  351855 main.go:141] libmachine: Stopping "ha-349588-m03"...
	I0803 23:56:11.757496  351855 main.go:141] libmachine: (ha-349588-m03) Calling .GetState
	I0803 23:56:11.759132  351855 main.go:141] libmachine: (ha-349588-m03) Calling .Stop
	I0803 23:56:11.762630  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 0/120
	I0803 23:56:12.764027  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 1/120
	I0803 23:56:13.765526  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 2/120
	I0803 23:56:14.766981  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 3/120
	I0803 23:56:15.768725  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 4/120
	I0803 23:56:16.770671  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 5/120
	I0803 23:56:17.772695  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 6/120
	I0803 23:56:18.774307  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 7/120
	I0803 23:56:19.775701  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 8/120
	I0803 23:56:20.777168  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 9/120
	I0803 23:56:21.778666  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 10/120
	I0803 23:56:22.780228  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 11/120
	I0803 23:56:23.781623  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 12/120
	I0803 23:56:24.783259  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 13/120
	I0803 23:56:25.784873  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 14/120
	I0803 23:56:26.786740  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 15/120
	I0803 23:56:27.788276  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 16/120
	I0803 23:56:28.789750  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 17/120
	I0803 23:56:29.792410  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 18/120
	I0803 23:56:30.793946  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 19/120
	I0803 23:56:31.795854  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 20/120
	I0803 23:56:32.797352  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 21/120
	I0803 23:56:33.798805  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 22/120
	I0803 23:56:34.800821  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 23/120
	I0803 23:56:35.802481  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 24/120
	I0803 23:56:36.804767  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 25/120
	I0803 23:56:37.806446  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 26/120
	I0803 23:56:38.807954  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 27/120
	I0803 23:56:39.809535  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 28/120
	I0803 23:56:40.810997  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 29/120
	I0803 23:56:41.813122  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 30/120
	I0803 23:56:42.814907  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 31/120
	I0803 23:56:43.816496  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 32/120
	I0803 23:56:44.817947  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 33/120
	I0803 23:56:45.819493  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 34/120
	I0803 23:56:46.821585  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 35/120
	I0803 23:56:47.822969  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 36/120
	I0803 23:56:48.824352  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 37/120
	I0803 23:56:49.825815  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 38/120
	I0803 23:56:50.828022  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 39/120
	I0803 23:56:51.829789  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 40/120
	I0803 23:56:52.831403  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 41/120
	I0803 23:56:53.832709  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 42/120
	I0803 23:56:54.834207  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 43/120
	I0803 23:56:55.835589  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 44/120
	I0803 23:56:56.836993  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 45/120
	I0803 23:56:57.838466  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 46/120
	I0803 23:56:58.839883  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 47/120
	I0803 23:56:59.841344  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 48/120
	I0803 23:57:00.842668  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 49/120
	I0803 23:57:01.844624  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 50/120
	I0803 23:57:02.846002  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 51/120
	I0803 23:57:03.847350  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 52/120
	I0803 23:57:04.848817  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 53/120
	I0803 23:57:05.850543  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 54/120
	I0803 23:57:06.852621  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 55/120
	I0803 23:57:07.854064  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 56/120
	I0803 23:57:08.855681  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 57/120
	I0803 23:57:09.857318  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 58/120
	I0803 23:57:10.858816  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 59/120
	I0803 23:57:11.860805  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 60/120
	I0803 23:57:12.862326  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 61/120
	I0803 23:57:13.863748  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 62/120
	I0803 23:57:14.865121  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 63/120
	I0803 23:57:15.866535  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 64/120
	I0803 23:57:16.868449  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 65/120
	I0803 23:57:17.869940  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 66/120
	I0803 23:57:18.871460  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 67/120
	I0803 23:57:19.873007  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 68/120
	I0803 23:57:20.874475  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 69/120
	I0803 23:57:21.876197  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 70/120
	I0803 23:57:22.877627  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 71/120
	I0803 23:57:23.879169  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 72/120
	I0803 23:57:24.880595  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 73/120
	I0803 23:57:25.882004  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 74/120
	I0803 23:57:26.883756  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 75/120
	I0803 23:57:27.885495  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 76/120
	I0803 23:57:28.886769  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 77/120
	I0803 23:57:29.888449  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 78/120
	I0803 23:57:30.889867  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 79/120
	I0803 23:57:31.891213  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 80/120
	I0803 23:57:32.893027  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 81/120
	I0803 23:57:33.894583  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 82/120
	I0803 23:57:34.896134  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 83/120
	I0803 23:57:35.897579  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 84/120
	I0803 23:57:36.899426  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 85/120
	I0803 23:57:37.900757  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 86/120
	I0803 23:57:38.902322  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 87/120
	I0803 23:57:39.904160  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 88/120
	I0803 23:57:40.905756  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 89/120
	I0803 23:57:41.907631  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 90/120
	I0803 23:57:42.909061  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 91/120
	I0803 23:57:43.910578  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 92/120
	I0803 23:57:44.912529  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 93/120
	I0803 23:57:45.914040  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 94/120
	I0803 23:57:46.915983  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 95/120
	I0803 23:57:47.917398  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 96/120
	I0803 23:57:48.919264  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 97/120
	I0803 23:57:49.920626  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 98/120
	I0803 23:57:50.922189  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 99/120
	I0803 23:57:51.924175  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 100/120
	I0803 23:57:52.925799  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 101/120
	I0803 23:57:53.927155  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 102/120
	I0803 23:57:54.928561  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 103/120
	I0803 23:57:55.929999  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 104/120
	I0803 23:57:56.931959  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 105/120
	I0803 23:57:57.933474  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 106/120
	I0803 23:57:58.935295  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 107/120
	I0803 23:57:59.936730  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 108/120
	I0803 23:58:00.938404  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 109/120
	I0803 23:58:01.940068  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 110/120
	I0803 23:58:02.941210  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 111/120
	I0803 23:58:03.942663  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 112/120
	I0803 23:58:04.943863  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 113/120
	I0803 23:58:05.945968  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 114/120
	I0803 23:58:06.947662  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 115/120
	I0803 23:58:07.949068  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 116/120
	I0803 23:58:08.950333  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 117/120
	I0803 23:58:09.951842  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 118/120
	I0803 23:58:10.953147  351855 main.go:141] libmachine: (ha-349588-m03) Waiting for machine to stop 119/120
	I0803 23:58:11.954158  351855 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0803 23:58:11.954248  351855 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0803 23:58:11.956274  351855 out.go:177] 
	W0803 23:58:11.957994  351855 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0803 23:58:11.958011  351855 out.go:239] * 
	* 
	W0803 23:58:11.963532  351855 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 23:58:11.964981  351855 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-349588 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-349588 --wait=true -v=7 --alsologtostderr
E0804 00:02:24.416874  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-349588 --wait=true -v=7 --alsologtostderr: (4m46.656026057s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-349588
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-349588 -n ha-349588
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-349588 logs -n 25: (1.94475387s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m02:/home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m04 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp testdata/cp-test.txt                                                | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588:/home/docker/cp-test_ha-349588-m04_ha-349588.txt                       |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588 sudo cat                                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588.txt                                 |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m02:/home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03:/home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m03 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-349588 node stop m02 -v=7                                                     | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-349588 node start m02 -v=7                                                    | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-349588 -v=7                                                           | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-349588 -v=7                                                                | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-349588 --wait=true -v=7                                                    | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 04 Aug 24 00:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-349588                                                                | ha-349588 | jenkins | v1.33.1 | 04 Aug 24 00:02 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:58:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:58:12.022178  352373 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:58:12.022292  352373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:58:12.022299  352373 out.go:304] Setting ErrFile to fd 2...
	I0803 23:58:12.022303  352373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:58:12.022473  352373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:58:12.023127  352373 out.go:298] Setting JSON to false
	I0803 23:58:12.024133  352373 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31240,"bootTime":1722698252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:58:12.024204  352373 start.go:139] virtualization: kvm guest
	I0803 23:58:12.027664  352373 out.go:177] * [ha-349588] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:58:12.029202  352373 out.go:177]   - MINIKUBE_LOCATION=19370
	I0803 23:58:12.029203  352373 notify.go:220] Checking for updates...
	I0803 23:58:12.031644  352373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:58:12.032899  352373 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:58:12.034081  352373 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:58:12.035387  352373 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:58:12.036532  352373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:58:12.038258  352373 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:58:12.038407  352373 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:58:12.039001  352373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:58:12.039073  352373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:58:12.054915  352373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0803 23:58:12.055503  352373 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:58:12.056016  352373 main.go:141] libmachine: Using API Version  1
	I0803 23:58:12.056041  352373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:58:12.056413  352373 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:58:12.056606  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:58:12.094583  352373 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:58:12.095757  352373 start.go:297] selected driver: kvm2
	I0803 23:58:12.095770  352373 start.go:901] validating driver "kvm2" against &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:58:12.095967  352373 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:58:12.096297  352373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:58:12.096362  352373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:58:12.112600  352373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:58:12.113337  352373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:58:12.113371  352373 cni.go:84] Creating CNI manager for ""
	I0803 23:58:12.113385  352373 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:58:12.113443  352373 start.go:340] cluster config:
	{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:58:12.113601  352373 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:58:12.116189  352373 out.go:177] * Starting "ha-349588" primary control-plane node in "ha-349588" cluster
	I0803 23:58:12.117315  352373 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:58:12.117352  352373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:58:12.117367  352373 cache.go:56] Caching tarball of preloaded images
	I0803 23:58:12.117466  352373 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:58:12.117480  352373 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:58:12.117627  352373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:58:12.117832  352373 start.go:360] acquireMachinesLock for ha-349588: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:58:12.117881  352373 start.go:364] duration metric: took 26.78µs to acquireMachinesLock for "ha-349588"
	I0803 23:58:12.117897  352373 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:58:12.117907  352373 fix.go:54] fixHost starting: 
	I0803 23:58:12.118177  352373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:58:12.118211  352373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:58:12.133364  352373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0803 23:58:12.133900  352373 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:58:12.134523  352373 main.go:141] libmachine: Using API Version  1
	I0803 23:58:12.134541  352373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:58:12.134872  352373 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:58:12.135109  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:58:12.135396  352373 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:58:12.137082  352373 fix.go:112] recreateIfNeeded on ha-349588: state=Running err=<nil>
	W0803 23:58:12.137102  352373 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:58:12.139101  352373 out.go:177] * Updating the running kvm2 "ha-349588" VM ...
	I0803 23:58:12.140417  352373 machine.go:94] provisionDockerMachine start ...
	I0803 23:58:12.140450  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:58:12.140695  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.143880  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.144404  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.144430  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.144592  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.144790  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.144988  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.145126  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.145298  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.145499  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.145526  352373 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:58:12.247449  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588
	
	I0803 23:58:12.247493  352373 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:58:12.247877  352373 buildroot.go:166] provisioning hostname "ha-349588"
	I0803 23:58:12.247907  352373 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:58:12.248085  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.251168  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.251668  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.251694  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.251903  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.252086  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.252217  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.252417  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.252584  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.252797  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.252811  352373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588 && echo "ha-349588" | sudo tee /etc/hostname
	I0803 23:58:12.368965  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588
	
	I0803 23:58:12.369011  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.371903  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.372370  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.372399  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.372524  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.372725  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.372889  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.373012  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.373171  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.373378  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.373401  352373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:58:12.474819  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
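The /etc/hosts snippet above is minikube's guard to make the guest's own hostname resolve locally: only when no entry for ha-349588 exists does it rewrite an existing 127.0.1.1 line or append one. Purely as an illustration (this check is not part of the test run), the resulting mapping could be verified over the same SSH session with:

	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expected: 127.0.1.1 ha-349588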
	I0803 23:58:12.474856  352373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:58:12.474919  352373 buildroot.go:174] setting up certificates
	I0803 23:58:12.474936  352373 provision.go:84] configureAuth start
	I0803 23:58:12.474972  352373 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:58:12.475296  352373 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:58:12.478143  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.478575  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.478596  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.478767  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.481238  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.481583  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.481614  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.481806  352373 provision.go:143] copyHostCerts
	I0803 23:58:12.481846  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:58:12.481897  352373 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:58:12.481911  352373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:58:12.481997  352373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:58:12.482102  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:58:12.482138  352373 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:58:12.482148  352373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:58:12.482189  352373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:58:12.482266  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:58:12.482289  352373 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:58:12.482296  352373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:58:12.482361  352373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:58:12.482435  352373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588 san=[127.0.0.1 192.168.39.168 ha-349588 localhost minikube]
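As a hedged aside (not a command the test issues), the SAN list above could be confirmed on the freshly generated server certificate with a standard openssl inspection:

	openssl x509 -in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'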
	I0803 23:58:12.658472  352373 provision.go:177] copyRemoteCerts
	I0803 23:58:12.658569  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:58:12.658606  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.661190  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.661587  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.661615  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.661776  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.661956  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.662094  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.662208  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:58:12.745563  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:58:12.745670  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0803 23:58:12.775983  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:58:12.776066  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:58:12.802827  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:58:12.802895  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:58:12.832413  352373 provision.go:87] duration metric: took 357.447101ms to configureAuth
	I0803 23:58:12.832443  352373 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:58:12.832675  352373 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:58:12.832765  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.835659  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.836019  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.836046  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.836266  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.836492  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.836671  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.836856  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.837022  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.837209  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.837229  352373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:59:43.753336  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:59:43.753367  352373 machine.go:97] duration metric: took 1m31.612933562s to provisionDockerMachine
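Nearly all of that 1m31.6s is the single SSH command above: issued at 23:58:12.837 and returning at 23:59:43.753, it accounts for roughly 90.9s on its own, consistent with the trailing systemctl restart crio being the slow step.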
	I0803 23:59:43.753388  352373 start.go:293] postStartSetup for "ha-349588" (driver="kvm2")
	I0803 23:59:43.753405  352373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:59:43.753423  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:43.753965  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:59:43.754005  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:43.757345  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.757858  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:43.757889  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.758100  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:43.758317  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.758476  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:43.758636  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:59:43.838432  352373 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:59:43.842968  352373 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:59:43.842999  352373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:59:43.843063  352373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:59:43.843144  352373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:59:43.843155  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:59:43.843243  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:59:43.853725  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:59:43.877762  352373 start.go:296] duration metric: took 124.357505ms for postStartSetup
	I0803 23:59:43.877813  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:43.878152  352373 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0803 23:59:43.878184  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:43.880917  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.881418  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:43.881445  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.881723  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:43.881952  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.882117  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:43.882244  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	W0803 23:59:43.960493  352373 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0803 23:59:43.960526  352373 fix.go:56] duration metric: took 1m31.84261873s for fixHost
	I0803 23:59:43.960555  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:43.963234  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.963603  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:43.963626  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.963804  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:43.964018  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.964149  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.964270  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:43.964427  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:43.964597  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:59:43.964608  352373 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0803 23:59:44.062478  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729584.031748852
	
	I0803 23:59:44.062506  352373 fix.go:216] guest clock: 1722729584.031748852
	I0803 23:59:44.062515  352373 fix.go:229] Guest: 2024-08-03 23:59:44.031748852 +0000 UTC Remote: 2024-08-03 23:59:43.960535295 +0000 UTC m=+91.982163429 (delta=71.213557ms)
	I0803 23:59:44.062577  352373 fix.go:200] guest clock delta is within tolerance: 71.213557ms
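The delta above is simply the guest clock minus the Remote timestamp: 1722729584.031748852 - 1722729583.960535295 = 0.071213557 s, i.e. the 71.213557ms that fix.go reports as within tolerance.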
	I0803 23:59:44.062590  352373 start.go:83] releasing machines lock for "ha-349588", held for 1m31.944698887s
	I0803 23:59:44.062620  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.062932  352373 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:59:44.065706  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.066164  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:44.066191  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.066341  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.066931  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.067128  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.067235  352373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:59:44.067276  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:44.067318  352373 ssh_runner.go:195] Run: cat /version.json
	I0803 23:59:44.067339  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:44.069940  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070217  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070396  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:44.070422  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070606  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:44.070605  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:44.070664  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070774  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:44.070829  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:44.070935  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:44.071008  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:44.071088  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:59:44.071133  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:44.071268  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:59:44.146572  352373 ssh_runner.go:195] Run: systemctl --version
	I0803 23:59:44.171937  352373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:59:44.336519  352373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:59:44.346120  352373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:59:44.346208  352373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:59:44.356254  352373 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:59:44.356282  352373 start.go:495] detecting cgroup driver to use...
	I0803 23:59:44.356346  352373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:59:44.373082  352373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:59:44.387351  352373 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:59:44.387417  352373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:59:44.401478  352373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:59:44.415579  352373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:59:44.571115  352373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:59:44.723860  352373 docker.go:233] disabling docker service ...
	I0803 23:59:44.723939  352373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:59:44.742115  352373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:59:44.756814  352373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:59:44.908053  352373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:59:45.059689  352373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:59:45.074602  352373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:59:45.095728  352373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:59:45.095814  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.106977  352373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:59:45.107044  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.117790  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.128930  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.140024  352373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:59:45.151462  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.162651  352373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.174932  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
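Taken together, the sed edits above (plus the removal of /etc/cni/net.mk) converge /etc/crio/crio.conf.d/02-crio.conf on a handful of settings. A rough sketch of the lines they enforce, with the file's surrounding TOML sections assumed rather than shown in this log:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]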
	I0803 23:59:45.185902  352373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:59:45.196192  352373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:59:45.206390  352373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:59:45.353161  352373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:59:46.102412  352373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:59:46.102487  352373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:59:46.107496  352373 start.go:563] Will wait 60s for crictl version
	I0803 23:59:46.107568  352373 ssh_runner.go:195] Run: which crictl
	I0803 23:59:46.111637  352373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:59:46.155875  352373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:59:46.155975  352373 ssh_runner.go:195] Run: crio --version
	I0803 23:59:46.188694  352373 ssh_runner.go:195] Run: crio --version
	I0803 23:59:46.222413  352373 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:59:46.223676  352373 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:59:46.226489  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:46.226844  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:46.226873  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:46.227137  352373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:59:46.232353  352373 kubeadm.go:883] updating cluster {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:59:46.232528  352373 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:59:46.232587  352373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:59:46.278029  352373 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:59:46.278054  352373 crio.go:433] Images already preloaded, skipping extraction
	I0803 23:59:46.278124  352373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:59:46.312783  352373 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:59:46.312808  352373 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:59:46.312819  352373 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.30.3 crio true true} ...
	I0803 23:59:46.312950  352373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:59:46.313044  352373 ssh_runner.go:195] Run: crio config
	I0803 23:59:46.360725  352373 cni.go:84] Creating CNI manager for ""
	I0803 23:59:46.360752  352373 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:59:46.360763  352373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:59:46.360784  352373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-349588 NodeName:ha-349588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:59:46.360952  352373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-349588"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:59:46.360992  352373 kube-vip.go:115] generating kube-vip config ...
	I0803 23:59:46.361037  352373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:59:46.373460  352373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:59:46.373582  352373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
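The manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp below), so kubelet runs kube-vip as a static pod that holds the cluster's APIServerHAVIP 192.168.39.254 on eth0, with leader election and control-plane load-balancing on port 8443 enabled. Illustrative only (not run by the test), the node currently holding the VIP could be spotted with:

	ip -4 addr show dev eth0 | grep 192.168.39.254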
	I0803 23:59:46.373636  352373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:59:46.383568  352373 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:59:46.383652  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:59:46.393526  352373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:59:46.411264  352373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:59:46.428022  352373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:59:46.445354  352373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:59:46.466565  352373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:59:46.471098  352373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:59:46.658268  352373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:59:46.700997  352373 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.168
	I0803 23:59:46.701023  352373 certs.go:194] generating shared ca certs ...
	I0803 23:59:46.701046  352373 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:46.701245  352373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:59:46.701286  352373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:59:46.701297  352373 certs.go:256] generating profile certs ...
	I0803 23:59:46.701384  352373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:59:46.701412  352373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e
	I0803 23:59:46.701427  352373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.67 192.168.39.79 192.168.39.254]
	I0803 23:59:47.006363  352373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e ...
	I0803 23:59:47.006400  352373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e: {Name:mk9f786535d9505912931877f662c0753dc060a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:47.006607  352373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e ...
	I0803 23:59:47.006623  352373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e: {Name:mk53d572a9fcd78e381f03b68adb9818446cf961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:47.006734  352373 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:59:47.006908  352373 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
	I0803 23:59:47.007047  352373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:59:47.007065  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:59:47.007078  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:59:47.007095  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:59:47.007108  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:59:47.007120  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:59:47.007139  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:59:47.007151  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:59:47.007165  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:59:47.007216  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:59:47.007250  352373 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:59:47.007260  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:59:47.007283  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:59:47.007310  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:59:47.007332  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:59:47.007368  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:59:47.007398  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.007412  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.007424  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.008052  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:59:47.034098  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:59:47.058909  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:59:47.085685  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:59:47.112370  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0803 23:59:47.138782  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:59:47.169235  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:59:47.200844  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:59:47.230235  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:59:47.255963  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:59:47.281423  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:59:47.306580  352373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:59:47.325420  352373 ssh_runner.go:195] Run: openssl version
	I0803 23:59:47.331786  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:59:47.342995  352373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.347801  352373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.347868  352373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.353623  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:59:47.363634  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:59:47.375915  352373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.380634  352373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.380695  352373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.386810  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:59:47.396622  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:59:47.407872  352373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.412589  352373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.412670  352373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.418613  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:59:47.428554  352373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:59:47.433703  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:59:47.439732  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:59:47.445548  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:59:47.451215  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:59:47.457185  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:59:47.463120  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0803 23:59:47.469073  352373 kubeadm.go:392] StartCluster: {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:59:47.469198  352373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:59:47.469269  352373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:59:47.509392  352373 cri.go:89] found id: "7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b"
	I0803 23:59:47.509416  352373 cri.go:89] found id: "a686a7f580b893d079fe4bd4ed0bae85431691790cbec289aaa1977f360c9683"
	I0803 23:59:47.509421  352373 cri.go:89] found id: "2be249bcb71a100c9a2a9452201d928fe6cba9c61ba486bc247249ae3fc2c5c9"
	I0803 23:59:47.509426  352373 cri.go:89] found id: "fdeef773baa1e9761ab366e53254f76d1f1a91972bc400d6b218dbbd70218061"
	I0803 23:59:47.509430  352373 cri.go:89] found id: "b54ac9de6d3da1774b24b9e2ba6fc0b56ea3cf76f8e6076e59c82e252d3100ba"
	I0803 23:59:47.509434  352373 cri.go:89] found id: "c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d"
	I0803 23:59:47.509439  352373 cri.go:89] found id: "81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87"
	I0803 23:59:47.509443  352373 cri.go:89] found id: "8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a"
	I0803 23:59:47.509447  352373 cri.go:89] found id: "1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511"
	I0803 23:59:47.509455  352373 cri.go:89] found id: "4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440"
	I0803 23:59:47.509459  352373 cri.go:89] found id: "9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70"
	I0803 23:59:47.509468  352373 cri.go:89] found id: "f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802"
	I0803 23:59:47.509475  352373 cri.go:89] found id: "c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35"
	I0803 23:59:47.509479  352373 cri.go:89] found id: "1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2"
	I0803 23:59:47.509488  352373 cri.go:89] found id: ""
	I0803 23:59:47.509563  352373 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.454228820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4333cc27-45cb-43b0-ad38-50ca79cec148 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.455269468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98160931-fec9-45bb-8027-881d1fcd6cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.455930201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729779455901548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98160931-fec9-45bb-8027-881d1fcd6cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.456809269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=faafd72b-812f-4819-b8b0-2ff9886af137 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.456891195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=faafd72b-812f-4819-b8b0-2ff9886af137 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.457526903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=faafd72b-812f-4819-b8b0-2ff9886af137 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.515636999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2faa2856-4ba6-4394-936f-bb7a7cd26c11 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.515751350Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2faa2856-4ba6-4394-936f-bb7a7cd26c11 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.519314926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b846b1b-eb62-45a6-9da5-274ba63518cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.520137037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729779520089402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b846b1b-eb62-45a6-9da5-274ba63518cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.524337252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d5c0974-8db1-412f-aa92-48adbc565e0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.524485700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d5c0974-8db1-412f-aa92-48adbc565e0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.524973802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d5c0974-8db1-412f-aa92-48adbc565e0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.575676461Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8eb70e44-22ce-4d25-be09-95efdfcf9485 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.577171886Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-4mwk4,Uid:a1f7a988-c439-426d-87ef-876b33660835,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729626253884213,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:51:38.108510779Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-349588,Uid:ac0440d4fa2ea903fff52b0464521eb3,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722729607939060456,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{kubernetes.io/config.hash: ac0440d4fa2ea903fff52b0464521eb3,kubernetes.io/config.seen: 2024-08-03T23:59:46.435216918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzmtg,Uid:8ac3c975-02c6-485b-9cfa-d754718d255e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592587128618,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-03T23:49:24.012837526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-349588,Uid:b38e6e3a481edaeb2d39d5c31b3f5139,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592580886583,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b38e6e3a481edaeb2d39d5c31b3f5139,kubernetes.io/config.seen: 2024-08-03T23:48:55.009442537Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-349588,Uid:d136dc55379aa8ec52be70f4c3d00d85,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1722729592571707986,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.168:8443,kubernetes.io/config.hash: d136dc55379aa8ec52be70f4c3d00d85,kubernetes.io/config.seen: 2024-08-03T23:48:55.009441231Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&PodSandboxMetadata{Name:etcd-ha-349588,Uid:2dba9e755d68dc45e521e88de3636318,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592497037507,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
dba9e755d68dc45e521e88de3636318,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.168:2379,kubernetes.io/config.hash: 2dba9e755d68dc45e521e88de3636318,kubernetes.io/config.seen: 2024-08-03T23:48:55.009437487Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&PodSandboxMetadata{Name:kube-proxy-bbzdt,Uid:5f4d564f-843e-4284-a9fa-792241d9ba26,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592493568062,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.612471107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSa
ndbox{Id:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&PodSandboxMetadata{Name:kindnet-2q4kc,Uid:720b92aa-c5c9-4664-a163-7c94fd5b3a4d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592478739720,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.636903650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-349588,Uid:9284bf34376b00a4b9834ebca6fce13d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592465104135,Labels:map[string]string{component: kube-scheduler,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9284bf34376b00a4b9834ebca6fce13d,kubernetes.io/config.seen: 2024-08-03T23:48:55.009443670Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e5eb5e5c-5ffb-4036-8a22-ed2204813520,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592445123619,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\
":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-03T23:49:24.012170306Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8qt6,Uid:ab1ff267-f331-4404-8610-50fb0680a2c5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729586569915498,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:24.002914243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-4mwk4,Uid:a1f7a988-c439-426d-87ef-876b33660835,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722729098431236137,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:51:38.108510779Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzmtg,Uid:8ac3c975-02c6-485b-9cfa-d754718d255e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728964324310136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:24.012837526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8qt6,Uid:ab1ff267-f331-4404-8610-50fb0680a2c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728964310124564,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:24.002914243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&PodSandboxMetadata{Name:kindnet-2q4kc,Uid:720b92aa-c5c9-4664-a163-7c94fd5b3a4d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728948569174068,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.636903650Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&PodSandboxMetadata{Name:kube-proxy-bbzdt,Uid:5f4d564f-843e-4284-a9fa-792241d9ba26,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728948532115781,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.612471107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&PodSandboxMetadata{Name:etcd-ha-349588,Uid:2dba9e755d68dc45e521e88de3636318,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728928635716125,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.168:2379,kubernetes.io/config.hash: 2dba9e755d68dc45e521e88de3636318,kubernetes.io/config.seen: 2024-08-03T23:48:48.162037830Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-349588,Uid:9284bf34376b00a4b9834ebca6fce13d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728928627120876,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9284bf34
376b00a4b9834ebca6fce13d,kubernetes.io/config.seen: 2024-08-03T23:48:48.162035930Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8eb70e44-22ce-4d25-be09-95efdfcf9485 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.578543578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51f12438-694d-4c2d-ba80-fd0de3ce08a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.578656435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51f12438-694d-4c2d-ba80-fd0de3ce08a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.579266792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51f12438-694d-4c2d-ba80-fd0de3ce08a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.583225007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3952c7c2-49a2-46fc-a908-9e97e566dc76 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.583315780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3952c7c2-49a2-46fc-a908-9e97e566dc76 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.584989336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a44cddc8-956e-435d-90f9-3d10caa590fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.585715312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729779585682816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a44cddc8-956e-435d-90f9-3d10caa590fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.586300412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbd64725-fb70-439d-b37c-efe70b064774 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.586443102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbd64725-fb70-439d-b37c-efe70b064774 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:02:59 ha-349588 crio[3774]: time="2024-08-04 00:02:59.586969480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbd64725-fb70-439d-b37c-efe70b064774 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6ffc34fbcff48       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   3                   8ebd5a9b3db7d       kube-controller-manager-ha-349588
	7cadffb6aead6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       4                   1c7a17c509d88       storage-provisioner
	7a61b46762dd2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   f525d29de8aa7       kube-apiserver-ha-349588
	6a73ecf6dbb19       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   41ca0335f69b2       busybox-fc5497c4f-4mwk4
	addbe1f2c028f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   2                   8ebd5a9b3db7d       kube-controller-manager-ha-349588
	4d85a224c630f       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   32aa658666fbf       kube-vip-ha-349588
	df0aa4c6b57f7       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      3 minutes ago        Running             kindnet-cni               1                   890187f2c4ef7       kindnet-2q4kc
	328f4b4dd498a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      3 minutes ago        Running             kube-proxy                1                   44e869a47a7ea       kube-proxy-bbzdt
	d05bb2bd8cb21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   c99083e857777       coredns-7db6d8ff4d-fzmtg
	f426f08e475fc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      3 minutes ago        Exited              kube-apiserver            2                   f525d29de8aa7       kube-apiserver-ha-349588
	e5ef9908c0d49       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   e7080d7770ee6       etcd-ha-349588
	58b3d40254301       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       3                   1c7a17c509d88       storage-provisioner
	5871d70c1b93e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      3 minutes ago        Running             kube-scheduler            1                   075cb4b116ff1       kube-scheduler-ha-349588
	7ec508b116836       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   28473da86b0ed       coredns-7db6d8ff4d-z8qt6
	c6fd002f59b0d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   a2e2fb00f6b54       busybox-fc5497c4f-4mwk4
	c780810d93e46       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   37f34e1fe1b85       coredns-7db6d8ff4d-fzmtg
	81817890a62a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   925c168e44d83       coredns-7db6d8ff4d-z8qt6
	8706b763ebe33       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   d2e5e2b102cd4       kindnet-2q4kc
	1f48d6d5328f8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   842c0109e8643       kube-proxy-bbzdt
	9bd785365c881       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   69dc19cc2bbff       etcd-ha-349588
	f061678087351       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   16e8a700bcd71       kube-scheduler-ha-349588
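	The table above is the node's CRI view of both the running containers and the previous (Exited) attempts. Assuming shell access to the guest VM, roughly the same listing can be pulled directly with crictl; the profile name below matches this cluster, but the command is a sketch rather than part of the test run:

	  out/minikube-linux-amd64 -p ha-349588 ssh -- sudo crictl ps -a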
	
	
	==> coredns [7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1663534453]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:59:59.108) (total time: 10001ms):
	Trace[1663534453]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:00:09.110)
	Trace[1663534453]: [10.001534042s] [10.001534042s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[84425102]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:00:00.324) (total time: 10001ms):
	Trace[84425102]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:00:10.325)
	Trace[84425102]: [10.001691633s] [10.001691633s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[395855082]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:00:02.660) (total time: 10001ms):
	Trace[395855082]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:00:12.661)
	Trace[395855082]: [10.001108225s] [10.001108225s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
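	The TLS handshake timeouts and "no route to host" errors above show this CoreDNS replica failing to reach the kube-apiserver through the kubernetes Service ClusterIP (10.96.0.1:443) while the control plane was being restarted. One way to confirm the ClusterIP is reachable again is a throwaway curl pod; this is a sketch only, and the pod name, image tag, and context name are illustrative, not taken from the run:

	  kubectl --context ha-349588 run api-check --rm -i --restart=Never \
	    --image=curlimages/curl:8.8.0 -- curl -ksS -m 5 https://10.96.0.1:443/healthz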
	
	
	==> coredns [81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87] <==
	[INFO] 10.244.2.2:56181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186686s
	[INFO] 10.244.2.2:56701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166229s
	[INFO] 10.244.2.2:38728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109023s
	[INFO] 10.244.2.2:45155 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001333912s
	[INFO] 10.244.2.2:51605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083342s
	[INFO] 10.244.1.2:38219 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015823s
	[INFO] 10.244.1.2:52488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178675s
	[INFO] 10.244.1.2:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097525s
	[INFO] 10.244.0.4:55438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074628s
	[INFO] 10.244.2.2:36883 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010754s
	[INFO] 10.244.2.2:53841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090252s
	[INFO] 10.244.2.2:59602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092585s
	[INFO] 10.244.1.2:59266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147793s
	[INFO] 10.244.1.2:44530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122943s
	[INFO] 10.244.0.4:42192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097553s
	[INFO] 10.244.2.2:40701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172686s
	[INFO] 10.244.2.2:38338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166475s
	[INFO] 10.244.2.2:58001 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140105s
	[INFO] 10.244.2.2:51129 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105337s
	[INFO] 10.244.1.2:44130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106258s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
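	The GOAWAY and "provide credentials" errors are the old kube-apiserver closing its HTTP/2 streams as it shut down, after which this CoreDNS instance received SIGTERM and exited. Logs from an exited attempt can still be retrieved after the restart, either via kubectl's --previous flag or straight from CRI-O on the node; a sketch using the pod and container ID shown in this report (context name assumed to match the profile):

	  kubectl --context ha-349588 -n kube-system logs coredns-7db6d8ff4d-z8qt6 --previous
	  out/minikube-linux-amd64 -p ha-349588 ssh -- sudo crictl logs 81817890a62a6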
	
	
	==> coredns [c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d] <==
	[INFO] 10.244.2.2:39556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137234s
	[INFO] 10.244.2.2:60582 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141615s
	[INFO] 10.244.2.2:36052 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074574s
	[INFO] 10.244.1.2:36007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019702s
	[INFO] 10.244.1.2:39746 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827365s
	[INFO] 10.244.1.2:47114 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078787s
	[INFO] 10.244.1.2:38856 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198841s
	[INFO] 10.244.1.2:49149 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001428046s
	[INFO] 10.244.0.4:47461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104433s
	[INFO] 10.244.0.4:47790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083369s
	[INFO] 10.244.0.4:39525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161056s
	[INFO] 10.244.2.2:58034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169362s
	[INFO] 10.244.1.2:44282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187567s
	[INFO] 10.244.1.2:48438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016257s
	[INFO] 10.244.0.4:52544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142962s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152657s
	[INFO] 10.244.0.4:45953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009439s
	[INFO] 10.244.1.2:57136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.1.2:58739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139508s
	[INFO] 10.244.1.2:50023 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125422s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36116->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: Trace[894765974]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:00:04.857) (total time: 13123ms):
	Trace[894765974]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36116->10.96.0.1:443: read: connection reset by peer 13122ms (00:00:17.980)
	Trace[894765974]: [13.123032832s] [13.123032832s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36116->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
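	"Still waiting on: kubernetes" comes from the ready plugin: CoreDNS does not report ready on port 8181 until the kubernetes plugin has synced its API caches, which is consistent with the connection errors above. A sketch of checking readiness directly (the pod IP placeholder is not from this run):

	  kubectl --context ha-349588 -n kube-system get pods -l k8s-app=kube-dns -o wide
	  out/minikube-linux-amd64 -p ha-349588 ssh -- curl -s http://<coredns-pod-ip>:8181/ready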
	
	
	==> describe nodes <==
	Name:               ha-349588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:48:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:49:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    ha-349588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 72ab11669b434797a5e41b5352f74be2
	  System UUID:                72ab1166-9b43-4797-a5e4-1b5352f74be2
	  Boot ID:                    e1637c60-2dbe-4ea9-949e-0f2b10f03d1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4mwk4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-fzmtg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-z8qt6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-349588                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-2q4kc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-349588             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-349588    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bbzdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-349588             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-349588                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m24s  kube-proxy       
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 14m    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    14m    kubelet          Node ha-349588 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  14m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m    kubelet          Node ha-349588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m    kubelet          Node ha-349588 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-349588 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Warning  ContainerGCFailed        4m5s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m17s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   RegisteredNode           98s    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   RegisteredNode           32s    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
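	The ContainerGCFailed warning ("dial unix /var/run/crio/crio.sock ... no such file or directory") lines up with CRI-O being restarted on this node around that time; the kubelet recovered once the socket came back, as the later RegisteredNode events show. If the warning persisted, the next step would be checking the runtime on the guest, e.g. (a sketch, assuming systemd manages crio as on the minikube guest image):

	  out/minikube-linux-amd64 -p ha-349588 ssh -- "sudo systemctl status crio --no-pager && sudo crictl info"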
	
	
	Name:               ha-349588-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_50_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:49:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-349588-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8919c8bcbd284472a3c4b5b3ae885051
	  System UUID:                8919c8bc-bd28-4472-a3c4-b5b3ae885051
	  Boot ID:                    8eb080af-fc2f-4c39-b585-2131e1411e0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-szvhv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-349588-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-zqhp6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-349588-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-349588-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gbg5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-349588-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-349588-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-349588-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-349588-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-349588-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  NodeNotReady             9m24s                  node-controller  Node ha-349588-m02 status is now: NodeNotReady
	  Normal  Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m50s (x8 over 2m50s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x8 over 2m50s)  kubelet          Node ha-349588-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x7 over 2m50s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m17s                  node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           98s                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	
	
	Name:               ha-349588-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_51_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:02:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:02:29 +0000   Sun, 04 Aug 2024 00:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:02:29 +0000   Sun, 04 Aug 2024 00:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:02:29 +0000   Sun, 04 Aug 2024 00:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:02:29 +0000   Sun, 04 Aug 2024 00:01:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    ha-349588-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 43f3523f989d4c49bec19f93fe176e08
	  System UUID:                43f3523f-989d-4c49-bec1-9f93fe176e08
	  Boot ID:                    02949723-9693-4a0e-89e2-108dd224f174
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mlkx9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-349588-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-7sr59                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-349588-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-349588-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gxhmd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-349588-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-349588-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-349588-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal   RegisteredNode           2m17s              node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	  Normal   NodeNotReady             96s                node-controller  Node ha-349588-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x2 over 62s)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x2 over 62s)  kubelet          Node ha-349588-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x2 over 62s)  kubelet          Node ha-349588-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s                kubelet          Node ha-349588-m03 has been rebooted, boot id: 02949723-9693-4a0e-89e2-108dd224f174
	  Normal   NodeReady                62s                kubelet          Node ha-349588-m03 status is now: NodeReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-349588-m03 event: Registered Node ha-349588-m03 in Controller
	
	
	Name:               ha-349588-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_52_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:52:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:02:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:02:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:02:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:02:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-349588-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ac9326af96243febea155e979b68343
	  System UUID:                4ac9326a-f962-43fe-bea1-55e979b68343
	  Boot ID:                    b54adeae-269f-4ac2-b146-653f6749ff54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7rfzm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-2sdf6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-349588-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-349588-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m17s              node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   NodeNotReady             96s                node-controller  Node ha-349588-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-349588-m04 has been rebooted, boot id: b54adeae-269f-4ac2-b146-653f6749ff54
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-349588-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-349588-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-349588-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-349588-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s                 kubelet          Node ha-349588-m04 status is now: NodeReady
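	
	A node status and event dump like the block above is ordinary `kubectl describe node` output; assuming the kubeconfig written by this test run is the active one, it can be regenerated with roughly:
	
	    kubectl describe node ha-349588-m04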
	
	
	==> dmesg <==
	[  +0.061103] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063697] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.170133] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.139803] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.274186] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.334862] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.414847] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.686183] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.066614] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.504623] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[Aug 3 23:49] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.728228] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.925424] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 3 23:56] kauditd_printk_skb: 1 callbacks suppressed
	[Aug 3 23:59] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +0.140812] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +0.199670] systemd-fstab-generator[3720]: Ignoring "noauto" option for root device
	[  +0.148018] systemd-fstab-generator[3732]: Ignoring "noauto" option for root device
	[  +0.296705] systemd-fstab-generator[3760]: Ignoring "noauto" option for root device
	[  +1.274744] systemd-fstab-generator[3861]: Ignoring "noauto" option for root device
	[  +5.977070] kauditd_printk_skb: 132 callbacks suppressed
	[Aug 4 00:00] kauditd_printk_skb: 76 callbacks suppressed
	[ +27.139653] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.696223] kauditd_printk_skb: 2 callbacks suppressed
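	
	The kernel ring buffer excerpt above comes from inside the guest VM; assuming the ha-349588 profile used by this test, a roughly equivalent dump can be pulled with:
	
	    minikube ssh -p ha-349588 -- dmesg | tail -n 40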
	
	
	==> etcd [9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70] <==
	2024/08/03 23:58:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:58:12.966852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"862.344946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-03T23:58:12.966866Z","caller":"traceutil/trace.go:171","msg":"trace[581641849] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; }","duration":"862.369417ms","start":"2024-08-03T23:58:12.104492Z","end":"2024-08-03T23:58:12.966861Z","steps":["trace[581641849] 'agreement among raft nodes before linearized reading'  (duration: 862.344537ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T23:58:12.966878Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T23:58:12.104468Z","time spent":"862.406072ms","remote":"127.0.0.1:36450","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	2024/08/03 23:58:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:58:13.109423Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:58:13.109557Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-03T23:58:13.10972Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-03T23:58:13.109931Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.109969Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.109995Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.11007Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110174Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110236Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110249Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110255Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110269Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110306Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110466Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110511Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110542Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110552Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.114108Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-03T23:58:13.114254Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-03T23:58:13.11428Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-349588","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> etcd [e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a] <==
	{"level":"warn","ts":"2024-08-04T00:01:53.573828Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:01:56.27173Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.79:2380/version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:01:56.271862Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:01:58.573029Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:01:58.574193Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:00.274079Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.79:2380/version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:00.274162Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:03.573897Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:03.575025Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:04.276902Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.79:2380/version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:04.276982Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:08.279718Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.79:2380/version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:08.279799Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f702a198aad1bc13","error":"Get \"https://192.168.39.79:2380/version\": dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:08.574563Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:08.575655Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-04T00:02:11.674867Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:02:11.674946Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:02:11.685861Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e34fba8f5739efe8","to":"f702a198aad1bc13","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-04T00:02:11.68594Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:02:11.692294Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e34fba8f5739efe8","to":"f702a198aad1bc13","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-04T00:02:11.692413Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:02:11.762771Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.79:41650","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-04T00:02:11.782833Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:02:13.575444Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:13.576714Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:03:00 up 14 min,  0 users,  load average: 0.39, 0.54, 0.35
	Linux ha-349588 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a] <==
	I0803 23:57:43.549775       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:57:43.550863       1 main.go:299] handling current node
	I0803 23:57:43.550923       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:57:43.551146       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:57:43.551311       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:57:43.551395       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:57:43.551528       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:57:43.551559       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:57:53.545391       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:57:53.545450       1 main.go:299] handling current node
	I0803 23:57:53.545471       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:57:53.545494       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:57:53.545664       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:57:53.545670       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:57:53.545729       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:57:53.545752       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:58:03.543010       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:58:03.543060       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:58:03.543272       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:58:03.543299       1 main.go:299] handling current node
	I0803 23:58:03.543322       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:58:03.543343       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:58:03.543481       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:58:03.543504       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	E0803 23:58:11.962914       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26] <==
	I0804 00:02:24.491428       1 main.go:299] handling current node
	I0804 00:02:34.485939       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0804 00:02:34.486053       1 main.go:299] handling current node
	I0804 00:02:34.486083       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0804 00:02:34.486096       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0804 00:02:34.486324       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0804 00:02:34.486469       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0804 00:02:34.486726       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0804 00:02:34.486764       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0804 00:02:44.491047       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0804 00:02:44.491128       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0804 00:02:44.491299       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0804 00:02:44.491310       1 main.go:299] handling current node
	I0804 00:02:44.491323       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0804 00:02:44.491328       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0804 00:02:44.491545       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0804 00:02:44.491576       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0804 00:02:54.482099       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0804 00:02:54.482148       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0804 00:02:54.482295       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0804 00:02:54.482320       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0804 00:02:54.482482       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0804 00:02:54.482511       1 main.go:299] handling current node
	I0804 00:02:54.482522       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0804 00:02:54.482527       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558] <==
	I0804 00:00:37.694327       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0804 00:00:37.695200       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0804 00:00:37.670765       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0804 00:00:37.795630       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:00:37.808733       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:00:37.808779       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:00:37.808786       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:00:37.808793       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:00:37.811794       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:00:37.813444       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:00:37.813474       1 policy_source.go:224] refreshing policies
	I0804 00:00:37.813892       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:00:37.868495       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:00:37.869492       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:00:37.870749       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:00:37.871273       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:00:37.871307       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:00:37.871927       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:00:37.878576       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0804 00:00:37.888810       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.67 192.168.39.79]
	I0804 00:00:37.891044       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:00:37.905166       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0804 00:00:37.913704       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0804 00:00:38.676473       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0804 00:00:39.240888       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.168 192.168.39.67]
	
	
	==> kube-apiserver [f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f] <==
	I0803 23:59:53.742303       1 options.go:221] external host was not specified, using 192.168.39.168
	I0803 23:59:53.745863       1 server.go:148] Version: v1.30.3
	I0803 23:59:53.746534       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:59:54.471945       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0803 23:59:54.484851       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:59:54.488273       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0803 23:59:54.488316       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0803 23:59:54.488976       1 instance.go:299] Using reconciler: lease
	W0804 00:00:14.471053       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0804 00:00:14.471178       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0804 00:00:14.490464       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
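	
	The fatal "Error creating leases" above means this apiserver instance never got a working etcd connection within the roughly 20 seconds it waited (23:59:54 to 00:00:14) and exited, which is why the replacement apiserver container in the previous block exists. A hedged reachability check against the local etcd client port, assuming the same profile (an immediate TLS handshake error would still show the port is reachable):
	
	    minikube ssh -p ha-349588 -- curl -k --max-time 5 https://127.0.0.1:2379/health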
	
	
	==> kube-controller-manager [6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d] <==
	I0804 00:01:22.888507       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:01:22.888724       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0804 00:01:22.893016       1 shared_informer.go:320] Caches are synced for GC
	I0804 00:01:22.901549       1 shared_informer.go:320] Caches are synced for taint
	I0804 00:01:22.901928       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0804 00:01:22.902341       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588-m02"
	I0804 00:01:22.902830       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588-m04"
	I0804 00:01:22.902974       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588"
	I0804 00:01:22.902406       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-349588-m03"
	I0804 00:01:22.903843       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0804 00:01:22.997461       1 shared_informer.go:320] Caches are synced for daemon sets
	I0804 00:01:23.027812       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:01:23.053896       1 shared_informer.go:320] Caches are synced for deployment
	I0804 00:01:23.070541       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:01:23.078751       1 shared_informer.go:320] Caches are synced for disruption
	I0804 00:01:23.082662       1 shared_informer.go:320] Caches are synced for stateful set
	I0804 00:01:23.508997       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:01:23.566233       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:01:23.566596       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:01:24.432780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.991746ms"
	I0804 00:01:24.433134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.094µs"
	I0804 00:01:59.810822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="166.753µs"
	I0804 00:02:17.393878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.376028ms"
	I0804 00:02:17.394051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.718µs"
	I0804 00:02:51.282860       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-349588-m04"
	
	
	==> kube-controller-manager [addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e] <==
	I0804 00:00:26.596912       1 serving.go:380] Generated self-signed cert in-memory
	I0804 00:00:26.842257       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0804 00:00:26.842340       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:26.844103       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 00:00:26.844259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0804 00:00:26.844313       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 00:00:26.844599       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0804 00:00:37.722180       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
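	
	The single error above only says that this restarted controller-manager gave up waiting for the apiserver's health endpoint while the apiserver was still coming back; the controller-manager in the previous block then took over. Assuming an admin kubeconfig for this cluster, the same endpoint can be checked by hand with:
	
	    kubectl get --raw='/healthz'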
	
	
	==> kube-proxy [1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511] <==
	E0803 23:57:03.483892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:06.685563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:06.685637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:09.757745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:09.757961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:09.759601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:09.759686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:12.829011       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:12.829149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:22.045137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:22.045419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:22.045649       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:22.045759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:22.046113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:22.046220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:37.403870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:37.403971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:43.551957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:43.552075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:49.693841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:49.693985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:58:08.125471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:58:08.125598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:58:11.195928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:58:11.196061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
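	
	Every failure above targets https://control-plane.minikube.internal:8443, which this cluster resolves to 192.168.39.254, the load-balanced control-plane address used by this HA profile, so it was the shared endpoint that was unreachable rather than any single apiserver. A hedged probe from inside the node, assuming the ha-349588 profile (an HTTP 401/403 body would still prove the address is routable again):
	
	    minikube ssh -p ha-349588 -- curl -k --max-time 5 https://control-plane.minikube.internal:8443/version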
	
	
	==> kube-proxy [328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53] <==
	I0803 23:59:54.729598       1 server_linux.go:69] "Using iptables proxy"
	E0803 23:59:55.646053       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:59:58.716215       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:01.788550       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:07.933249       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:17.148892       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:35.582107       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0804 00:00:35.582219       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0804 00:00:35.632677       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:00:35.632801       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:00:35.632828       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:00:35.636138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:00:35.636550       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:00:35.636599       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:35.638430       1 config.go:192] "Starting service config controller"
	I0804 00:00:35.638480       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:00:35.638512       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:00:35.638536       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:00:35.639233       1 config.go:319] "Starting node config controller"
	I0804 00:00:35.639267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:00:37.440877       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:00:37.441075       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:00:37.443014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc] <==
	W0804 00:00:32.214625       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.168:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.214752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.168:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.633212       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.168:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.633296       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.168:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.702704       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.168:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.702809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.168:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.740556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.168:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.740643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.168:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.935901       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.168:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.935969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.168:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.964858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.168:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.964932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.168:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:34.260851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.168:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:34.260984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.168:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:34.351043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.168:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:34.351228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.168:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:34.446203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.168:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:34.446339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.168:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:37.711889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0804 00:00:37.712337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 00:00:37.712779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 00:00:37.712894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 00:00:37.713014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 00:00:37.713101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0804 00:00:57.202851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802] <==
	W0803 23:58:09.600961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0803 23:58:09.601001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0803 23:58:09.718084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:58:09.718242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:58:09.879521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:58:09.879701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:58:09.993051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:58:09.993103       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:58:10.469187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:58:10.469286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:58:10.636027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:58:10.636077       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:58:10.716843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:58:10.716900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:58:11.078253       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:58:11.078304       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:58:11.131750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:58:11.131798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:58:11.708122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:58:11.708180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:58:11.736684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:58:11.736730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 23:58:12.576485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:58:12.576571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:58:12.941217       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 00:00:45 ha-349588 kubelet[1373]: I0804 00:00:45.670179    1373 scope.go:117] "RemoveContainer" containerID="addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e"
	Aug 04 00:00:45 ha-349588 kubelet[1373]: E0804 00:00:45.670745    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-349588_kube-system(b38e6e3a481edaeb2d39d5c31b3f5139)\"" pod="kube-system/kube-controller-manager-ha-349588" podUID="b38e6e3a481edaeb2d39d5c31b3f5139"
	Aug 04 00:00:47 ha-349588 kubelet[1373]: I0804 00:00:47.079288    1373 scope.go:117] "RemoveContainer" containerID="58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4"
	Aug 04 00:00:55 ha-349588 kubelet[1373]: E0804 00:00:55.163079    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:00:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:00:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:00:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:00:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:00:57 ha-349588 kubelet[1373]: I0804 00:00:57.080711    1373 scope.go:117] "RemoveContainer" containerID="addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e"
	Aug 04 00:00:57 ha-349588 kubelet[1373]: E0804 00:00:57.081841    1373 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-349588_kube-system(b38e6e3a481edaeb2d39d5c31b3f5139)\"" pod="kube-system/kube-controller-manager-ha-349588" podUID="b38e6e3a481edaeb2d39d5c31b3f5139"
	Aug 04 00:01:11 ha-349588 kubelet[1373]: I0804 00:01:11.078533    1373 scope.go:117] "RemoveContainer" containerID="addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e"
	Aug 04 00:01:13 ha-349588 kubelet[1373]: I0804 00:01:13.326476    1373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-4mwk4" podStartSLOduration=574.347075815 podStartE2EDuration="9m35.326448023s" podCreationTimestamp="2024-08-03 23:51:38 +0000 UTC" firstStartedPulling="2024-08-03 23:51:38.66567803 +0000 UTC m=+163.780373453" lastFinishedPulling="2024-08-03 23:51:39.645050226 +0000 UTC m=+164.759745661" observedRunningTime="2024-08-03 23:51:39.812711683 +0000 UTC m=+164.927407128" watchObservedRunningTime="2024-08-04 00:01:13.326448023 +0000 UTC m=+738.441143457"
	Aug 04 00:01:27 ha-349588 kubelet[1373]: I0804 00:01:27.080598    1373 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-349588" podUID="b3a4c252-ee5e-4b2f-b982-a09904a9c547"
	Aug 04 00:01:27 ha-349588 kubelet[1373]: I0804 00:01:27.108158    1373 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-349588"
	Aug 04 00:01:28 ha-349588 kubelet[1373]: I0804 00:01:28.010691    1373 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-349588" podUID="b3a4c252-ee5e-4b2f-b982-a09904a9c547"
	Aug 04 00:01:55 ha-349588 kubelet[1373]: E0804 00:01:55.141260    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:01:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:01:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:01:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:01:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:02:55 ha-349588 kubelet[1373]: E0804 00:02:55.141464    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:02:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:02:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:02:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:02:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:02:59.062496  353901 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-323890/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-349588 -n ha-349588
helpers_test.go:261: (dbg) Run:  kubectl --context ha-349588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (411.39s)
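The post-mortem commands above (helpers_test.go:254 and :261) can also be rerun by hand against the same profile. What follows is a minimal, hypothetical Go sketch (not part of the test suite) that shells out to those exact two commands; the profile name ha-349588 and the binary path out/minikube-linux-amd64 are the ones that appear in the log above.

	// postmortem.go - hypothetical helper that reruns the post-mortem checks above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		profile := "ha-349588" // profile name taken from the report above

		// Same check as helpers_test.go:254: report the API server state.
		status, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
		if err != nil {
			log.Printf("status returned error: %v", err)
		}
		fmt.Printf("apiserver: %s\n", status)

		// Same check as helpers_test.go:261: list pods that are not Running.
		pods, err := exec.Command("kubectl", "--context", profile, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			log.Printf("kubectl returned error: %v", err)
		}
		fmt.Printf("non-running pods: %s\n", pods)
	}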

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 stop -v=7 --alsologtostderr: exit status 82 (2m0.52075303s)

                                                
                                                
-- stdout --
	* Stopping node "ha-349588-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:03:19.279348  354310 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:03:19.279631  354310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:03:19.279641  354310 out.go:304] Setting ErrFile to fd 2...
	I0804 00:03:19.279646  354310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:03:19.279883  354310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:03:19.280150  354310 out.go:298] Setting JSON to false
	I0804 00:03:19.280228  354310 mustload.go:65] Loading cluster: ha-349588
	I0804 00:03:19.280589  354310 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:03:19.280684  354310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0804 00:03:19.280861  354310 mustload.go:65] Loading cluster: ha-349588
	I0804 00:03:19.280992  354310 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:03:19.281034  354310 stop.go:39] StopHost: ha-349588-m04
	I0804 00:03:19.281428  354310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:03:19.281486  354310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:03:19.297720  354310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0804 00:03:19.298296  354310 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:03:19.299065  354310 main.go:141] libmachine: Using API Version  1
	I0804 00:03:19.299103  354310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:03:19.299460  354310 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:03:19.301845  354310 out.go:177] * Stopping node "ha-349588-m04"  ...
	I0804 00:03:19.302890  354310 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 00:03:19.302949  354310 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0804 00:03:19.303284  354310 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 00:03:19.303311  354310 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0804 00:03:19.306354  354310 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0804 00:03:19.306746  354310 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 01:02:45 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0804 00:03:19.306769  354310 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0804 00:03:19.307091  354310 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0804 00:03:19.307330  354310 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0804 00:03:19.307507  354310 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0804 00:03:19.307692  354310 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	I0804 00:03:19.397011  354310 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 00:03:19.451759  354310 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 00:03:19.506964  354310 main.go:141] libmachine: Stopping "ha-349588-m04"...
	I0804 00:03:19.507007  354310 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0804 00:03:19.508564  354310 main.go:141] libmachine: (ha-349588-m04) Calling .Stop
	I0804 00:03:19.512562  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 0/120
	I0804 00:03:20.514040  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 1/120
	I0804 00:03:21.515326  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 2/120
	I0804 00:03:22.516685  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 3/120
	I0804 00:03:23.518143  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 4/120
	I0804 00:03:24.520186  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 5/120
	I0804 00:03:25.522677  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 6/120
	I0804 00:03:26.525152  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 7/120
	I0804 00:03:27.526663  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 8/120
	I0804 00:03:28.528175  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 9/120
	I0804 00:03:29.530492  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 10/120
	I0804 00:03:30.532189  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 11/120
	I0804 00:03:31.533927  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 12/120
	I0804 00:03:32.536074  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 13/120
	I0804 00:03:33.537500  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 14/120
	I0804 00:03:34.539549  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 15/120
	I0804 00:03:35.540986  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 16/120
	I0804 00:03:36.542351  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 17/120
	I0804 00:03:37.544032  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 18/120
	I0804 00:03:38.546293  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 19/120
	I0804 00:03:39.548001  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 20/120
	I0804 00:03:40.549591  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 21/120
	I0804 00:03:41.551101  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 22/120
	I0804 00:03:42.552690  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 23/120
	I0804 00:03:43.554136  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 24/120
	I0804 00:03:44.556159  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 25/120
	I0804 00:03:45.557707  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 26/120
	I0804 00:03:46.560012  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 27/120
	I0804 00:03:47.561890  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 28/120
	I0804 00:03:48.564051  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 29/120
	I0804 00:03:49.566240  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 30/120
	I0804 00:03:50.567659  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 31/120
	I0804 00:03:51.569314  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 32/120
	I0804 00:03:52.570751  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 33/120
	I0804 00:03:53.572044  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 34/120
	I0804 00:03:54.573382  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 35/120
	I0804 00:03:55.575118  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 36/120
	I0804 00:03:56.576440  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 37/120
	I0804 00:03:57.578057  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 38/120
	I0804 00:03:58.580030  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 39/120
	I0804 00:03:59.582171  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 40/120
	I0804 00:04:00.584128  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 41/120
	I0804 00:04:01.585892  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 42/120
	I0804 00:04:02.588126  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 43/120
	I0804 00:04:03.590641  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 44/120
	I0804 00:04:04.592893  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 45/120
	I0804 00:04:05.594512  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 46/120
	I0804 00:04:06.595980  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 47/120
	I0804 00:04:07.597540  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 48/120
	I0804 00:04:08.599043  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 49/120
	I0804 00:04:09.601587  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 50/120
	I0804 00:04:10.603237  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 51/120
	I0804 00:04:11.604672  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 52/120
	I0804 00:04:12.606748  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 53/120
	I0804 00:04:13.608128  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 54/120
	I0804 00:04:14.610365  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 55/120
	I0804 00:04:15.612815  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 56/120
	I0804 00:04:16.614458  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 57/120
	I0804 00:04:17.615999  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 58/120
	I0804 00:04:18.617448  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 59/120
	I0804 00:04:19.619406  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 60/120
	I0804 00:04:20.621637  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 61/120
	I0804 00:04:21.623216  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 62/120
	I0804 00:04:22.624723  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 63/120
	I0804 00:04:23.626373  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 64/120
	I0804 00:04:24.628165  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 65/120
	I0804 00:04:25.630038  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 66/120
	I0804 00:04:26.632008  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 67/120
	I0804 00:04:27.634692  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 68/120
	I0804 00:04:28.636152  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 69/120
	I0804 00:04:29.638283  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 70/120
	I0804 00:04:30.640136  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 71/120
	I0804 00:04:31.642110  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 72/120
	I0804 00:04:32.643861  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 73/120
	I0804 00:04:33.646340  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 74/120
	I0804 00:04:34.648059  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 75/120
	I0804 00:04:35.649473  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 76/120
	I0804 00:04:36.651583  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 77/120
	I0804 00:04:37.653726  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 78/120
	I0804 00:04:38.655411  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 79/120
	I0804 00:04:39.657498  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 80/120
	I0804 00:04:40.659031  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 81/120
	I0804 00:04:41.660533  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 82/120
	I0804 00:04:42.661879  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 83/120
	I0804 00:04:43.664098  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 84/120
	I0804 00:04:44.666456  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 85/120
	I0804 00:04:45.669138  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 86/120
	I0804 00:04:46.670639  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 87/120
	I0804 00:04:47.672051  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 88/120
	I0804 00:04:48.674448  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 89/120
	I0804 00:04:49.676567  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 90/120
	I0804 00:04:50.678058  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 91/120
	I0804 00:04:51.679294  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 92/120
	I0804 00:04:52.680938  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 93/120
	I0804 00:04:53.682433  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 94/120
	I0804 00:04:54.684681  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 95/120
	I0804 00:04:55.686352  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 96/120
	I0804 00:04:56.687734  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 97/120
	I0804 00:04:57.689237  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 98/120
	I0804 00:04:58.691001  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 99/120
	I0804 00:04:59.693315  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 100/120
	I0804 00:05:00.694787  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 101/120
	I0804 00:05:01.696143  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 102/120
	I0804 00:05:02.697437  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 103/120
	I0804 00:05:03.698712  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 104/120
	I0804 00:05:04.699981  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 105/120
	I0804 00:05:05.701555  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 106/120
	I0804 00:05:06.703026  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 107/120
	I0804 00:05:07.704398  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 108/120
	I0804 00:05:08.706347  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 109/120
	I0804 00:05:09.708476  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 110/120
	I0804 00:05:10.709913  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 111/120
	I0804 00:05:11.712192  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 112/120
	I0804 00:05:12.714524  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 113/120
	I0804 00:05:13.716257  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 114/120
	I0804 00:05:14.718270  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 115/120
	I0804 00:05:15.720370  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 116/120
	I0804 00:05:16.722915  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 117/120
	I0804 00:05:17.724439  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 118/120
	I0804 00:05:18.726087  354310 main.go:141] libmachine: (ha-349588-m04) Waiting for machine to stop 119/120
	I0804 00:05:19.726708  354310 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0804 00:05:19.726807  354310 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0804 00:05:19.728805  354310 out.go:177] 
	W0804 00:05:19.730266  354310 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0804 00:05:19.730289  354310 out.go:239] * 
	* 
	W0804 00:05:19.738225  354310 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:05:19.740114  354310 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-349588 stop -v=7 --alsologtostderr": exit status 82
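The stderr above shows the shape of this failure: the stop path backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, asks the kvm2 driver to stop the node, then polls the machine state roughly once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up, which surfaces as GUEST_STOP_TIMEOUT and exit status 82. The sketch below only illustrates that bounded wait loop; it is not minikube's actual stop implementation, and getState is a hypothetical stand-in for the driver's state call.

	// waitstop.go - illustrative sketch of the bounded wait loop seen above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getState is a hypothetical stand-in for the libmachine driver's GetState call.
	func getState() string { return "Running" }

	// waitForStop polls the VM state once per second, up to maxTries attempts,
	// mirroring the 0/120 ... 119/120 progression in the log above.
	func waitForStop(maxTries int) error {
		for i := 0; i < maxTries; i++ {
			if getState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120); err != nil {
			// In the output above this condition is reported as GUEST_STOP_TIMEOUT (exit status 82).
			fmt.Println("stop err:", err)
		}
	}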
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr: exit status 3 (18.958552643s)

                                                
                                                
-- stdout --
	ha-349588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-349588-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:05:19.793860  354729 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:05:19.794197  354729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:05:19.794209  354729 out.go:304] Setting ErrFile to fd 2...
	I0804 00:05:19.794216  354729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:05:19.794413  354729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:05:19.794636  354729 out.go:298] Setting JSON to false
	I0804 00:05:19.794676  354729 mustload.go:65] Loading cluster: ha-349588
	I0804 00:05:19.794811  354729 notify.go:220] Checking for updates...
	I0804 00:05:19.795132  354729 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:05:19.795153  354729 status.go:255] checking status of ha-349588 ...
	I0804 00:05:19.795600  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:19.795692  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:19.817634  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45205
	I0804 00:05:19.818171  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:19.818953  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:19.818988  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:19.819378  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:19.819617  354729 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0804 00:05:19.821479  354729 status.go:330] ha-349588 host status = "Running" (err=<nil>)
	I0804 00:05:19.821496  354729 host.go:66] Checking if "ha-349588" exists ...
	I0804 00:05:19.821841  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:19.821879  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:19.838817  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0804 00:05:19.839458  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:19.840216  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:19.840270  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:19.840731  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:19.840952  354729 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0804 00:05:19.844209  354729 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0804 00:05:19.844864  354729 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0804 00:05:19.844902  354729 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0804 00:05:19.845100  354729 host.go:66] Checking if "ha-349588" exists ...
	I0804 00:05:19.845459  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:19.845531  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:19.861903  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0804 00:05:19.862332  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:19.862867  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:19.862900  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:19.863280  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:19.863491  354729 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0804 00:05:19.863758  354729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:05:19.863784  354729 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0804 00:05:19.867025  354729 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0804 00:05:19.867473  354729 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0804 00:05:19.867498  354729 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0804 00:05:19.867657  354729 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0804 00:05:19.867848  354729 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0804 00:05:19.868037  354729 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0804 00:05:19.868211  354729 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0804 00:05:19.951116  354729 ssh_runner.go:195] Run: systemctl --version
	I0804 00:05:19.959164  354729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:05:19.977870  354729 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0804 00:05:19.977914  354729 api_server.go:166] Checking apiserver status ...
	I0804 00:05:19.978005  354729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:05:20.005765  354729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5041/cgroup
	W0804 00:05:20.016534  354729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5041/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:05:20.016606  354729 ssh_runner.go:195] Run: ls
	I0804 00:05:20.022494  354729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:05:20.031413  354729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:05:20.031443  354729 status.go:422] ha-349588 apiserver status = Running (err=<nil>)
	I0804 00:05:20.031453  354729 status.go:257] ha-349588 status: &{Name:ha-349588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:05:20.031470  354729 status.go:255] checking status of ha-349588-m02 ...
	I0804 00:05:20.031795  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:20.031831  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:20.047516  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I0804 00:05:20.047996  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:20.048500  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:20.048524  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:20.048871  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:20.049040  354729 main.go:141] libmachine: (ha-349588-m02) Calling .GetState
	I0804 00:05:20.050866  354729 status.go:330] ha-349588-m02 host status = "Running" (err=<nil>)
	I0804 00:05:20.050899  354729 host.go:66] Checking if "ha-349588-m02" exists ...
	I0804 00:05:20.051345  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:20.051404  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:20.067259  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41125
	I0804 00:05:20.067843  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:20.068425  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:20.068456  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:20.068860  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:20.069067  354729 main.go:141] libmachine: (ha-349588-m02) Calling .GetIP
	I0804 00:05:20.072543  354729 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0804 00:05:20.073098  354729 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:59:58 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0804 00:05:20.073145  354729 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0804 00:05:20.073523  354729 host.go:66] Checking if "ha-349588-m02" exists ...
	I0804 00:05:20.074000  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:20.074054  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:20.091236  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0804 00:05:20.091746  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:20.092264  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:20.092289  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:20.092616  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:20.092824  354729 main.go:141] libmachine: (ha-349588-m02) Calling .DriverName
	I0804 00:05:20.093034  354729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:05:20.093055  354729 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHHostname
	I0804 00:05:20.096657  354729 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0804 00:05:20.097254  354729 main.go:141] libmachine: (ha-349588-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:a2:30", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:59:58 +0000 UTC Type:0 Mac:52:54:00:c5:a2:30 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-349588-m02 Clientid:01:52:54:00:c5:a2:30}
	I0804 00:05:20.097301  354729 main.go:141] libmachine: (ha-349588-m02) DBG | domain ha-349588-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:c5:a2:30 in network mk-ha-349588
	I0804 00:05:20.097491  354729 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHPort
	I0804 00:05:20.097737  354729 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHKeyPath
	I0804 00:05:20.097931  354729 main.go:141] libmachine: (ha-349588-m02) Calling .GetSSHUsername
	I0804 00:05:20.098090  354729 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m02/id_rsa Username:docker}
	I0804 00:05:20.189040  354729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:05:20.209656  354729 kubeconfig.go:125] found "ha-349588" server: "https://192.168.39.254:8443"
	I0804 00:05:20.209712  354729 api_server.go:166] Checking apiserver status ...
	I0804 00:05:20.209766  354729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:05:20.225691  354729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1587/cgroup
	W0804 00:05:20.237206  354729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1587/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:05:20.237271  354729 ssh_runner.go:195] Run: ls
	I0804 00:05:20.243102  354729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 00:05:20.248899  354729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 00:05:20.248940  354729 status.go:422] ha-349588-m02 apiserver status = Running (err=<nil>)
	I0804 00:05:20.248951  354729 status.go:257] ha-349588-m02 status: &{Name:ha-349588-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:05:20.248969  354729 status.go:255] checking status of ha-349588-m04 ...
	I0804 00:05:20.249368  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:20.249418  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:20.266595  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I0804 00:05:20.267237  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:20.267813  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:20.267843  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:20.268234  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:20.268465  354729 main.go:141] libmachine: (ha-349588-m04) Calling .GetState
	I0804 00:05:20.270275  354729 status.go:330] ha-349588-m04 host status = "Running" (err=<nil>)
	I0804 00:05:20.270295  354729 host.go:66] Checking if "ha-349588-m04" exists ...
	I0804 00:05:20.270698  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:20.270747  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:20.286915  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44557
	I0804 00:05:20.287477  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:20.287986  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:20.288007  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:20.288404  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:20.288664  354729 main.go:141] libmachine: (ha-349588-m04) Calling .GetIP
	I0804 00:05:20.291855  354729 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0804 00:05:20.292345  354729 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 01:02:45 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0804 00:05:20.292389  354729 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0804 00:05:20.292494  354729 host.go:66] Checking if "ha-349588-m04" exists ...
	I0804 00:05:20.292884  354729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:05:20.292938  354729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:05:20.308844  354729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I0804 00:05:20.309314  354729 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:05:20.309944  354729 main.go:141] libmachine: Using API Version  1
	I0804 00:05:20.309977  354729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:05:20.310347  354729 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:05:20.310578  354729 main.go:141] libmachine: (ha-349588-m04) Calling .DriverName
	I0804 00:05:20.310831  354729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:05:20.310854  354729 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHHostname
	I0804 00:05:20.314366  354729 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0804 00:05:20.314853  354729 main.go:141] libmachine: (ha-349588-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:3e:82", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 01:02:45 +0000 UTC Type:0 Mac:52:54:00:15:3e:82 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:ha-349588-m04 Clientid:01:52:54:00:15:3e:82}
	I0804 00:05:20.314881  354729 main.go:141] libmachine: (ha-349588-m04) DBG | domain ha-349588-m04 has defined IP address 192.168.39.61 and MAC address 52:54:00:15:3e:82 in network mk-ha-349588
	I0804 00:05:20.315034  354729 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHPort
	I0804 00:05:20.315275  354729 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHKeyPath
	I0804 00:05:20.315473  354729 main.go:141] libmachine: (ha-349588-m04) Calling .GetSSHUsername
	I0804 00:05:20.315641  354729 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588-m04/id_rsa Username:docker}
	W0804 00:05:38.701788  354729 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.61:22: connect: no route to host
	W0804 00:05:38.701937  354729 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	E0804 00:05:38.701959  354729 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host
	I0804 00:05:38.701968  354729 status.go:257] ha-349588-m04 status: &{Name:ha-349588-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0804 00:05:38.701987  354729 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.61:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr" : exit status 3
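The per-node status above is printed from a Go struct with Name, Host, Kubelet, APIServer and Kubeconfig fields (status.go:257), and the worker node's SSH failure is what turns the overall command into exit status 3. For scripting, `minikube status` also accepts --output json; the sketch below is a hypothetical checker built on that flag. It assumes the JSON field names match the struct fields shown in the log, and it tolerates either a single object or an array of per-node objects, since the exact shape can vary by minikube version.

	// statuscheck.go - hypothetical checker that parses `minikube status --output json`.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// status exits non-zero when any node is unhealthy (exit status 3 above),
		// so the captured output is still inspected even when err != nil.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-349588",
			"status", "--output", "json").Output()
		if err != nil {
			fmt.Println("status exited with:", err)
		}

		var nodes []nodeStatus
		if jsonErr := json.Unmarshal(out, &nodes); jsonErr != nil {
			var single nodeStatus
			if jsonErr2 := json.Unmarshal(out, &single); jsonErr2 != nil {
				fmt.Println("could not parse status output:", jsonErr)
				return
			}
			nodes = []nodeStatus{single}
		}

		for _, n := range nodes {
			if n.Host != "Running" || n.Kubelet != "Running" {
				fmt.Printf("unhealthy node %s: host=%s kubelet=%s\n", n.Name, n.Host, n.Kubelet)
			}
		}
	}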
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-349588 -n ha-349588
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-349588 logs -n 25: (1.853335602s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m04 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp testdata/cp-test.txt                                                | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588:/home/docker/cp-test_ha-349588-m04_ha-349588.txt                       |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588 sudo cat                                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588.txt                                 |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m02:/home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m02 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt                              | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m03:/home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n                                                                 | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | ha-349588-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-349588 ssh -n ha-349588-m03 sudo cat                                          | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC | 03 Aug 24 23:52 UTC |
	|         | /home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-349588 node stop m02 -v=7                                                     | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-349588 node start m02 -v=7                                                    | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-349588 -v=7                                                           | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-349588 -v=7                                                                | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:56 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-349588 --wait=true -v=7                                                    | ha-349588 | jenkins | v1.33.1 | 03 Aug 24 23:58 UTC | 04 Aug 24 00:02 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-349588                                                                | ha-349588 | jenkins | v1.33.1 | 04 Aug 24 00:02 UTC |                     |
	| node    | ha-349588 node delete m03 -v=7                                                   | ha-349588 | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC | 04 Aug 24 00:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-349588 stop -v=7                                                              | ha-349588 | jenkins | v1.33.1 | 04 Aug 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
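	For reference, the stop/start sequence recorded in the last rows of the table can be re-run locally with the same flags (a rough reproduction sketch; binary path and profile name are taken from this report, timings and addresses will differ):
	
	  out/minikube-linux-amd64 stop -p ha-349588 -v=7 --alsologtostderr
	  out/minikube-linux-amd64 start -p ha-349588 --wait=true -v=7 --alsologtostderr
	  out/minikube-linux-amd64 node list -p ha-349588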
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:58:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:58:12.022178  352373 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:58:12.022292  352373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:58:12.022299  352373 out.go:304] Setting ErrFile to fd 2...
	I0803 23:58:12.022303  352373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:58:12.022473  352373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:58:12.023127  352373 out.go:298] Setting JSON to false
	I0803 23:58:12.024133  352373 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31240,"bootTime":1722698252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:58:12.024204  352373 start.go:139] virtualization: kvm guest
	I0803 23:58:12.027664  352373 out.go:177] * [ha-349588] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:58:12.029202  352373 out.go:177]   - MINIKUBE_LOCATION=19370
	I0803 23:58:12.029203  352373 notify.go:220] Checking for updates...
	I0803 23:58:12.031644  352373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:58:12.032899  352373 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:58:12.034081  352373 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:58:12.035387  352373 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:58:12.036532  352373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:58:12.038258  352373 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:58:12.038407  352373 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:58:12.039001  352373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:58:12.039073  352373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:58:12.054915  352373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0803 23:58:12.055503  352373 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:58:12.056016  352373 main.go:141] libmachine: Using API Version  1
	I0803 23:58:12.056041  352373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:58:12.056413  352373 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:58:12.056606  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:58:12.094583  352373 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:58:12.095757  352373 start.go:297] selected driver: kvm2
	I0803 23:58:12.095770  352373 start.go:901] validating driver "kvm2" against &{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:58:12.095967  352373 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:58:12.096297  352373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:58:12.096362  352373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:58:12.112600  352373 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:58:12.113337  352373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:58:12.113371  352373 cni.go:84] Creating CNI manager for ""
	I0803 23:58:12.113385  352373 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:58:12.113443  352373 start.go:340] cluster config:
	{Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:58:12.113601  352373 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:58:12.116189  352373 out.go:177] * Starting "ha-349588" primary control-plane node in "ha-349588" cluster
	I0803 23:58:12.117315  352373 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:58:12.117352  352373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:58:12.117367  352373 cache.go:56] Caching tarball of preloaded images
	I0803 23:58:12.117466  352373 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0803 23:58:12.117480  352373 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:58:12.117627  352373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/config.json ...
	I0803 23:58:12.117832  352373 start.go:360] acquireMachinesLock for ha-349588: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0803 23:58:12.117881  352373 start.go:364] duration metric: took 26.78µs to acquireMachinesLock for "ha-349588"
	I0803 23:58:12.117897  352373 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:58:12.117907  352373 fix.go:54] fixHost starting: 
	I0803 23:58:12.118177  352373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:58:12.118211  352373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:58:12.133364  352373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0803 23:58:12.133900  352373 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:58:12.134523  352373 main.go:141] libmachine: Using API Version  1
	I0803 23:58:12.134541  352373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:58:12.134872  352373 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:58:12.135109  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:58:12.135396  352373 main.go:141] libmachine: (ha-349588) Calling .GetState
	I0803 23:58:12.137082  352373 fix.go:112] recreateIfNeeded on ha-349588: state=Running err=<nil>
	W0803 23:58:12.137102  352373 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:58:12.139101  352373 out.go:177] * Updating the running kvm2 "ha-349588" VM ...
	I0803 23:58:12.140417  352373 machine.go:94] provisionDockerMachine start ...
	I0803 23:58:12.140450  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:58:12.140695  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.143880  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.144404  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.144430  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.144592  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.144790  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.144988  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.145126  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.145298  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.145499  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.145526  352373 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:58:12.247449  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588
	
	I0803 23:58:12.247493  352373 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:58:12.247877  352373 buildroot.go:166] provisioning hostname "ha-349588"
	I0803 23:58:12.247907  352373 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:58:12.248085  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.251168  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.251668  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.251694  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.251903  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.252086  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.252217  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.252417  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.252584  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.252797  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.252811  352373 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-349588 && echo "ha-349588" | sudo tee /etc/hostname
	I0803 23:58:12.368965  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-349588
	
	I0803 23:58:12.369011  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.371903  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.372370  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.372399  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.372524  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.372725  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.372889  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.373012  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.373171  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.373378  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.373401  352373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-349588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-349588/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-349588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:58:12.474819  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
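	If the sed/append in the command above succeeds, the VM's /etc/hosts ends up with a loopback mapping for the hostname; illustratively (the resulting line is not captured verbatim in this log):
	
	  127.0.1.1 ha-349588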
	I0803 23:58:12.474856  352373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0803 23:58:12.474919  352373 buildroot.go:174] setting up certificates
	I0803 23:58:12.474936  352373 provision.go:84] configureAuth start
	I0803 23:58:12.474972  352373 main.go:141] libmachine: (ha-349588) Calling .GetMachineName
	I0803 23:58:12.475296  352373 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:58:12.478143  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.478575  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.478596  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.478767  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.481238  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.481583  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.481614  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.481806  352373 provision.go:143] copyHostCerts
	I0803 23:58:12.481846  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:58:12.481897  352373 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0803 23:58:12.481911  352373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0803 23:58:12.481997  352373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0803 23:58:12.482102  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:58:12.482138  352373 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0803 23:58:12.482148  352373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0803 23:58:12.482189  352373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0803 23:58:12.482266  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:58:12.482289  352373 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0803 23:58:12.482296  352373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0803 23:58:12.482361  352373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0803 23:58:12.482435  352373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.ha-349588 san=[127.0.0.1 192.168.39.168 ha-349588 localhost minikube]
	I0803 23:58:12.658472  352373 provision.go:177] copyRemoteCerts
	I0803 23:58:12.658569  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:58:12.658606  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.661190  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.661587  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.661615  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.661776  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.661956  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.662094  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.662208  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:58:12.745563  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0803 23:58:12.745670  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0803 23:58:12.775983  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0803 23:58:12.776066  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:58:12.802827  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0803 23:58:12.802895  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:58:12.832413  352373 provision.go:87] duration metric: took 357.447101ms to configureAuth
	I0803 23:58:12.832443  352373 buildroot.go:189] setting minikube options for container-runtime
	I0803 23:58:12.832675  352373 config.go:182] Loaded profile config "ha-349588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:58:12.832765  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:58:12.835659  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.836019  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:58:12.836046  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:58:12.836266  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:58:12.836492  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.836671  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:58:12.836856  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:58:12.837022  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:58:12.837209  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:58:12.837229  352373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0803 23:59:43.753336  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0803 23:59:43.753367  352373 machine.go:97] duration metric: took 1m31.612933562s to provisionDockerMachine
	I0803 23:59:43.753388  352373 start.go:293] postStartSetup for "ha-349588" (driver="kvm2")
	I0803 23:59:43.753405  352373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:59:43.753423  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:43.753965  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:59:43.754005  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:43.757345  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.757858  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:43.757889  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.758100  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:43.758317  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.758476  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:43.758636  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:59:43.838432  352373 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:59:43.842968  352373 info.go:137] Remote host: Buildroot 2023.02.9
	I0803 23:59:43.842999  352373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0803 23:59:43.843063  352373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0803 23:59:43.843144  352373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0803 23:59:43.843155  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0803 23:59:43.843243  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:59:43.853725  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:59:43.877762  352373 start.go:296] duration metric: took 124.357505ms for postStartSetup
	I0803 23:59:43.877813  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:43.878152  352373 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0803 23:59:43.878184  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:43.880917  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.881418  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:43.881445  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.881723  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:43.881952  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.882117  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:43.882244  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	W0803 23:59:43.960493  352373 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0803 23:59:43.960526  352373 fix.go:56] duration metric: took 1m31.84261873s for fixHost
	I0803 23:59:43.960555  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:43.963234  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.963603  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:43.963626  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:43.963804  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:43.964018  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.964149  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:43.964270  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:43.964427  352373 main.go:141] libmachine: Using SSH client type: native
	I0803 23:59:43.964597  352373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0803 23:59:43.964608  352373 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0803 23:59:44.062478  352373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722729584.031748852
	
	I0803 23:59:44.062506  352373 fix.go:216] guest clock: 1722729584.031748852
	I0803 23:59:44.062515  352373 fix.go:229] Guest: 2024-08-03 23:59:44.031748852 +0000 UTC Remote: 2024-08-03 23:59:43.960535295 +0000 UTC m=+91.982163429 (delta=71.213557ms)
	I0803 23:59:44.062577  352373 fix.go:200] guest clock delta is within tolerance: 71.213557ms
	I0803 23:59:44.062590  352373 start.go:83] releasing machines lock for "ha-349588", held for 1m31.944698887s
	I0803 23:59:44.062620  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.062932  352373 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:59:44.065706  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.066164  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:44.066191  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.066341  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.066931  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.067128  352373 main.go:141] libmachine: (ha-349588) Calling .DriverName
	I0803 23:59:44.067235  352373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:59:44.067276  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:44.067318  352373 ssh_runner.go:195] Run: cat /version.json
	I0803 23:59:44.067339  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHHostname
	I0803 23:59:44.069940  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070217  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070396  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:44.070422  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070606  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:44.070605  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:44.070664  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:44.070774  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:44.070829  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHPort
	I0803 23:59:44.070935  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:44.071008  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHKeyPath
	I0803 23:59:44.071088  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:59:44.071133  352373 main.go:141] libmachine: (ha-349588) Calling .GetSSHUsername
	I0803 23:59:44.071268  352373 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/ha-349588/id_rsa Username:docker}
	I0803 23:59:44.146572  352373 ssh_runner.go:195] Run: systemctl --version
	I0803 23:59:44.171937  352373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0803 23:59:44.336519  352373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0803 23:59:44.346120  352373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0803 23:59:44.346208  352373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:59:44.356254  352373 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:59:44.356282  352373 start.go:495] detecting cgroup driver to use...
	I0803 23:59:44.356346  352373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0803 23:59:44.373082  352373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0803 23:59:44.387351  352373 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:59:44.387417  352373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:59:44.401478  352373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:59:44.415579  352373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:59:44.571115  352373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:59:44.723860  352373 docker.go:233] disabling docker service ...
	I0803 23:59:44.723939  352373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:59:44.742115  352373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:59:44.756814  352373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:59:44.908053  352373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:59:45.059689  352373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:59:45.074602  352373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:59:45.095728  352373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0803 23:59:45.095814  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.106977  352373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0803 23:59:45.107044  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.117790  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.128930  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.140024  352373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:59:45.151462  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.162651  352373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.174932  352373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0803 23:59:45.185902  352373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:59:45.196192  352373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:59:45.206390  352373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:59:45.353161  352373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0803 23:59:46.102412  352373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0803 23:59:46.102487  352373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0803 23:59:46.107496  352373 start.go:563] Will wait 60s for crictl version
	I0803 23:59:46.107568  352373 ssh_runner.go:195] Run: which crictl
	I0803 23:59:46.111637  352373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:59:46.155875  352373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0803 23:59:46.155975  352373 ssh_runner.go:195] Run: crio --version
	I0803 23:59:46.188694  352373 ssh_runner.go:195] Run: crio --version
	I0803 23:59:46.222413  352373 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0803 23:59:46.223676  352373 main.go:141] libmachine: (ha-349588) Calling .GetIP
	I0803 23:59:46.226489  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:46.226844  352373 main.go:141] libmachine: (ha-349588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:f9:50", ip: ""} in network mk-ha-349588: {Iface:virbr1 ExpiryTime:2024-08-04 00:48:24 +0000 UTC Type:0 Mac:52:54:00:d9:f9:50 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:ha-349588 Clientid:01:52:54:00:d9:f9:50}
	I0803 23:59:46.226873  352373 main.go:141] libmachine: (ha-349588) DBG | domain ha-349588 has defined IP address 192.168.39.168 and MAC address 52:54:00:d9:f9:50 in network mk-ha-349588
	I0803 23:59:46.227137  352373 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0803 23:59:46.232353  352373 kubeadm.go:883] updating cluster {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:59:46.232528  352373 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:59:46.232587  352373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:59:46.278029  352373 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:59:46.278054  352373 crio.go:433] Images already preloaded, skipping extraction
	I0803 23:59:46.278124  352373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:59:46.312783  352373 crio.go:514] all images are preloaded for cri-o runtime.
	I0803 23:59:46.312808  352373 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:59:46.312819  352373 kubeadm.go:934] updating node { 192.168.39.168 8443 v1.30.3 crio true true} ...
	I0803 23:59:46.312950  352373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-349588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
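	To confirm that the kubelet flags rendered above actually reach the node, the generated drop-in can be inspected on the VM (a hypothetical spot check, e.g. via minikube ssh -p ha-349588; the drop-in path is the one written further down in this log):
	
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  systemctl cat kubelet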
	I0803 23:59:46.313044  352373 ssh_runner.go:195] Run: crio config
	I0803 23:59:46.360725  352373 cni.go:84] Creating CNI manager for ""
	I0803 23:59:46.360752  352373 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0803 23:59:46.360763  352373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:59:46.360784  352373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-349588 NodeName:ha-349588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:59:46.360952  352373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-349588"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
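	The kubeadm config printed above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node later in this log. One hedged way to sanity-check such a file, assuming the versioned kubeadm binary sits under /var/lib/minikube/binaries/v1.30.3 as shown below, is a dry run that makes no changes (preflight may still complain on a node that already hosts a cluster):
	
	  sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run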
	I0803 23:59:46.360992  352373 kube-vip.go:115] generating kube-vip config ...
	I0803 23:59:46.361037  352373 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0803 23:59:46.373460  352373 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0803 23:59:46.373582  352373 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
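The kube-vip manifest above sets vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1. Assuming kube-vip follows the usual client-go leader-election constraints (lease duration > renew deadline > retry period, all positive), a small illustrative sanity check of those timings would be:

```go
package main

import (
	"fmt"
	"time"
)

// checkLeaderElection applies the ordering that client-go's leaderelection
// package enforces: leaseDuration > renewDeadline > retryPeriod, all > 0.
// This is an illustrative helper, not kube-vip's own validation code.
func checkLeaderElection(leaseDuration, renewDeadline, retryPeriod time.Duration) error {
	switch {
	case retryPeriod <= 0:
		return fmt.Errorf("retryPeriod must be positive")
	case renewDeadline <= retryPeriod:
		return fmt.Errorf("renewDeadline (%v) must be greater than retryPeriod (%v)", renewDeadline, retryPeriod)
	case leaseDuration <= renewDeadline:
		return fmt.Errorf("leaseDuration (%v) must be greater than renewDeadline (%v)", leaseDuration, renewDeadline)
	}
	return nil
}

func main() {
	// Values from the generated kube-vip pod: 5s / 3s / 1s.
	if err := checkLeaderElection(5*time.Second, 3*time.Second, 1*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("kube-vip leader election timings are consistent")
}
```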
	I0803 23:59:46.373636  352373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 23:59:46.383568  352373 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:59:46.383652  352373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0803 23:59:46.393526  352373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0803 23:59:46.411264  352373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:59:46.428022  352373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0803 23:59:46.445354  352373 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0803 23:59:46.466565  352373 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0803 23:59:46.471098  352373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:59:46.658268  352373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:59:46.700997  352373 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588 for IP: 192.168.39.168
	I0803 23:59:46.701023  352373 certs.go:194] generating shared ca certs ...
	I0803 23:59:46.701046  352373 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:46.701245  352373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0803 23:59:46.701286  352373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0803 23:59:46.701297  352373 certs.go:256] generating profile certs ...
	I0803 23:59:46.701384  352373 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/client.key
	I0803 23:59:46.701412  352373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e
	I0803 23:59:46.701427  352373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168 192.168.39.67 192.168.39.79 192.168.39.254]
	I0803 23:59:47.006363  352373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e ...
	I0803 23:59:47.006400  352373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e: {Name:mk9f786535d9505912931877f662c0753dc060a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:47.006607  352373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e ...
	I0803 23:59:47.006623  352373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e: {Name:mk53d572a9fcd78e381f03b68adb9818446cf961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:59:47.006734  352373 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt.169b7e6e -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt
	I0803 23:59:47.006908  352373 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key.169b7e6e -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key
	I0803 23:59:47.007047  352373 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key
	I0803 23:59:47.007065  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0803 23:59:47.007078  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0803 23:59:47.007095  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0803 23:59:47.007108  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0803 23:59:47.007120  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0803 23:59:47.007139  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0803 23:59:47.007151  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0803 23:59:47.007165  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0803 23:59:47.007216  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0803 23:59:47.007250  352373 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0803 23:59:47.007260  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:59:47.007283  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:59:47.007310  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:59:47.007332  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0803 23:59:47.007368  352373 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0803 23:59:47.007398  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.007412  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.007424  352373 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.008052  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:59:47.034098  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0803 23:59:47.058909  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:59:47.085685  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:59:47.112370  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0803 23:59:47.138782  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:59:47.169235  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:59:47.200844  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/ha-349588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:59:47.230235  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:59:47.255963  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0803 23:59:47.281423  352373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0803 23:59:47.306580  352373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
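The apiserver profile cert generated at 23:59:47.006 carries the IP SANs logged above (the cluster service IPs, loopback, the three control-plane node IPs and the kube-vip VIP 192.168.39.254). A minimal self-signed sketch of producing a certificate with that SAN set in Go (illustrative only; the actual profile cert is signed against the minikube CA, not self-signed):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a self-signed serving cert carrying the same IP
	// SANs that the log reports for the apiserver profile cert.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.168"), net.ParseIP("192.168.39.67"),
			net.ParseIP("192.168.39.79"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```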
	I0803 23:59:47.325420  352373 ssh_runner.go:195] Run: openssl version
	I0803 23:59:47.331786  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0803 23:59:47.342995  352373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.347801  352373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.347868  352373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0803 23:59:47.353623  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0803 23:59:47.363634  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0803 23:59:47.375915  352373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.380634  352373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.380695  352373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0803 23:59:47.386810  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:59:47.396622  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:59:47.407872  352373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.412589  352373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.412670  352373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:59:47.418613  352373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:59:47.428554  352373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:59:47.433703  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:59:47.439732  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:59:47.445548  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:59:47.451215  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:59:47.457185  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:59:47.463120  352373 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
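The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check written in Go with crypto/x509 (illustrative; the paths below are the ones used in this run and only exist on the node):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at
// path expires within the given window, answering the same question as
// `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Same certificates the log checks with -checkend 86400 (24h).
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		expiring, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expiring within 24h: %v\n", p, expiring)
	}
}
```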
	I0803 23:59:47.469073  352373 kubeadm.go:392] StartCluster: {Name:ha-349588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-349588 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.79 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.61 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:59:47.469198  352373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0803 23:59:47.469269  352373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:59:47.509392  352373 cri.go:89] found id: "7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b"
	I0803 23:59:47.509416  352373 cri.go:89] found id: "a686a7f580b893d079fe4bd4ed0bae85431691790cbec289aaa1977f360c9683"
	I0803 23:59:47.509421  352373 cri.go:89] found id: "2be249bcb71a100c9a2a9452201d928fe6cba9c61ba486bc247249ae3fc2c5c9"
	I0803 23:59:47.509426  352373 cri.go:89] found id: "fdeef773baa1e9761ab366e53254f76d1f1a91972bc400d6b218dbbd70218061"
	I0803 23:59:47.509430  352373 cri.go:89] found id: "b54ac9de6d3da1774b24b9e2ba6fc0b56ea3cf76f8e6076e59c82e252d3100ba"
	I0803 23:59:47.509434  352373 cri.go:89] found id: "c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d"
	I0803 23:59:47.509439  352373 cri.go:89] found id: "81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87"
	I0803 23:59:47.509443  352373 cri.go:89] found id: "8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a"
	I0803 23:59:47.509447  352373 cri.go:89] found id: "1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511"
	I0803 23:59:47.509455  352373 cri.go:89] found id: "4f4a81f925548f663ba6886e356f4dab3e9c5bb4b7593d9a059c653b2e42e440"
	I0803 23:59:47.509459  352373 cri.go:89] found id: "9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70"
	I0803 23:59:47.509468  352373 cri.go:89] found id: "f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802"
	I0803 23:59:47.509475  352373 cri.go:89] found id: "c7a32eac144452fae0664bc5434712e48a2b290dfe2be3dffcc0d329503e7c35"
	I0803 23:59:47.509479  352373 cri.go:89] found id: "1b3755f3d86ea7074de015bbc594c9b44955545b2b969412ddaa039635f049c2"
	I0803 23:59:47.509488  352373 cri.go:89] found id: ""
	I0803 23:59:47.509563  352373 ssh_runner.go:195] Run: sudo runc list -f json
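The "found id" entries above come from running `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` over SSH (cri.go:54). A small local-only sketch of the same listing, assuming crictl is installed and the current user can talk to the CRI socket:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the CRI runtime for all kube-system container IDs, mirroring the
	// command the log runs via ssh_runner (here run locally, without sudo).
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
```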
	
	
	==> CRI-O <==
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.433700659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69514b8f-4757-45bf-a89e-26c9bf36aa29 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.434855154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ec581d0-90d7-4e7b-8538-7088ebe6ffca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.435320424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729939435296831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ec581d0-90d7-4e7b-8538-7088ebe6ffca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.436230975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1f775d5-303b-4421-aeb6-894bc8429df1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.436295263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1f775d5-303b-4421-aeb6-894bc8429df1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.436756241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1f775d5-303b-4421-aeb6-894bc8429df1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.456989008Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6439aeda-8bbd-49fb-922c-43b79bd596ea name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.457909530Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-4mwk4,Uid:a1f7a988-c439-426d-87ef-876b33660835,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729626253884213,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:51:38.108510779Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-349588,Uid:ac0440d4fa2ea903fff52b0464521eb3,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722729607939060456,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{kubernetes.io/config.hash: ac0440d4fa2ea903fff52b0464521eb3,kubernetes.io/config.seen: 2024-08-03T23:59:46.435216918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzmtg,Uid:8ac3c975-02c6-485b-9cfa-d754718d255e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592587128618,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-03T23:49:24.012837526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-349588,Uid:b38e6e3a481edaeb2d39d5c31b3f5139,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592580886583,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b38e6e3a481edaeb2d39d5c31b3f5139,kubernetes.io/config.seen: 2024-08-03T23:48:55.009442537Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-349588,Uid:d136dc55379aa8ec52be70f4c3d00d85,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1722729592571707986,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.168:8443,kubernetes.io/config.hash: d136dc55379aa8ec52be70f4c3d00d85,kubernetes.io/config.seen: 2024-08-03T23:48:55.009441231Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&PodSandboxMetadata{Name:etcd-ha-349588,Uid:2dba9e755d68dc45e521e88de3636318,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592497037507,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2
dba9e755d68dc45e521e88de3636318,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.168:2379,kubernetes.io/config.hash: 2dba9e755d68dc45e521e88de3636318,kubernetes.io/config.seen: 2024-08-03T23:48:55.009437487Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&PodSandboxMetadata{Name:kube-proxy-bbzdt,Uid:5f4d564f-843e-4284-a9fa-792241d9ba26,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592493568062,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.612471107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSa
ndbox{Id:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&PodSandboxMetadata{Name:kindnet-2q4kc,Uid:720b92aa-c5c9-4664-a163-7c94fd5b3a4d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592478739720,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.636903650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-349588,Uid:9284bf34376b00a4b9834ebca6fce13d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592465104135,Labels:map[string]string{component: kube-scheduler,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9284bf34376b00a4b9834ebca6fce13d,kubernetes.io/config.seen: 2024-08-03T23:48:55.009443670Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e5eb5e5c-5ffb-4036-8a22-ed2204813520,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729592445123619,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\
":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-03T23:49:24.012170306Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8qt6,Uid:ab1ff267-f331-4404-8610-50fb0680a2c5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722729586569915498,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:24.002914243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-4mwk4,Uid:a1f7a988-c439-426d-87ef-876b33660835,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722729098431236137,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:51:38.108510779Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fzmtg,Uid:8ac3c975-02c6-485b-9cfa-d754718d255e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728964324310136,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:24.012837526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8qt6,Uid:ab1ff267-f331-4404-8610-50fb0680a2c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728964310124564,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:24.002914243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&PodSandboxMetadata{Name:kindnet-2q4kc,Uid:720b92aa-c5c9-4664-a163-7c94fd5b3a4d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728948569174068,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.636903650Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&PodSandboxMetadata{Name:kube-proxy-bbzdt,Uid:5f4d564f-843e-4284-a9fa-792241d9ba26,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728948532115781,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-03T23:49:07.612471107Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&PodSandboxMetadata{Name:etcd-ha-349588,Uid:2dba9e755d68dc45e521e88de3636318,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728928635716125,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.168:2379,kubernetes.io/config.hash: 2dba9e755d68dc45e521e88de3636318,kubernetes.io/config.seen: 2024-08-03T23:48:48.162037830Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-349588,Uid:9284bf34376b00a4b9834ebca6fce13d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722728928627120876,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9284bf34
376b00a4b9834ebca6fce13d,kubernetes.io/config.seen: 2024-08-03T23:48:48.162035930Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6439aeda-8bbd-49fb-922c-43b79bd596ea name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.458902138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=973232ce-0af2-43df-9bad-7e99fb89ce64 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.458986641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=973232ce-0af2-43df-9bad-7e99fb89ce64 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.461407674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=973232ce-0af2-43df-9bad-7e99fb89ce64 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.489583230Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3625b48-8930-47e4-a75c-5c40ffb2d726 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.489682566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3625b48-8930-47e4-a75c-5c40ffb2d726 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.490883145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42f9d1fa-fdad-487e-ab94-f5e770801a83 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.491569348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729939491539901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42f9d1fa-fdad-487e-ab94-f5e770801a83 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.492209800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68a0402c-9ae8-4551-bcce-bc6b5c598f52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.492338647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68a0402c-9ae8-4551-bcce-bc6b5c598f52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.495253172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68a0402c-9ae8-4551-bcce-bc6b5c598f52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.543939441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0b35279-5b6b-4e61-ac07-1430e7f90f5f name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.544033953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0b35279-5b6b-4e61-ac07-1430e7f90f5f name=/runtime.v1.RuntimeService/Version
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.545737913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0408be85-122f-4d80-b1d3-520d2a8e3e20 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.546200798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722729939546176499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0408be85-122f-4d80-b1d3-520d2a8e3e20 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.546876185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5b1916c-9414-44d0-9e85-641700929fa9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.546938062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5b1916c-9414-44d0-9e85-641700929fa9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:05:39 ha-349588 crio[3774]: time="2024-08-04 00:05:39.547684583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722729671109928746,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cadffb6aead607c8de8e66fb57f96a33c8dcbb226a2e9d907328e39dc313774,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722729647094949887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722729635092057632,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a73ecf6dbb193c9cdef6fa9e2af0d25ebbc54a702595d98c3fcc53e7c8b5769,PodSandboxId:41ca0335f69b2dc186a1b384a849e18823b4872e7c91ddeb75e80672ce12d848,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722729626433153554,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annotations:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e,PodSandboxId:8ebd5a9b3db7d742b527d17a5340dfc37680ae104bb023a733c952701acf7a07,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722729625685921250,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b38e6e3a481edaeb2d39d5c31b3f5139,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d85a224c630f448338028de5d64a59d2a6a54dfc93362930dc2ef6fbfd754c7,PodSandboxId:32aa658666fbf77f9f305ead94f8a4601158b4648ed6e0c480ecba7e33fedfaa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722729608045013349,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0440d4fa2ea903fff52b0464521eb3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53,PodSandboxId:44e869a47a7ea4704327cbfb099530a1445d678ec6734cfdf65bbfcd2e05917d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722729593256624998,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26,PodSandboxId:890187f2c4ef7c71ae6a0a589842abc0b104aca65452f05582f4fc22bf3fce90,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722729593290485576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a,PodSandboxId:c99083e85777788c08012bcd536533be20ec2f88fffd6a11551fc3e30b472b7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729593251461659,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kubernetes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58b3d402543012aed419fb22e5282deeb62dec07faf125b06f57d5e8ea170cd4,PodSandboxId:1c7a17c509d8863620b277381ac8614521a3f39ebf502c7f2df5f77aa9fb58cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722729592861972088,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5eb5e5c-5ffb-4036-8a22-ed2204813520,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce326c5,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f,PodSandboxId:f525d29de8aa7cbaeacb9eb55b13327a0bf8f832d90ee3690a7e9fecd168b6e1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722729593004902778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d136dc55379aa8ec52be70f4c3d00d85,},Annotations:map[string]string{io.kubernetes.container.hash: 511000c3,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a,PodSandboxId:e7080d7770ee6566ee35b083f64422b6830518f30473eae2bdd9c393641d93c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722729592913614787,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc,PodSandboxId:075cb4b116ff1f0b097865950f0a231345d18a7b5cd9bc790b11a7927aa4bad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722729592792260547,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b,PodSandboxId:28473da86b0ed6670e5a5f2947f654f6ad8592d0010564dbd7c7783a20172e80,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722729586724650906,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6fd002f59b0d4b372ccf9149ab1e00fad694c7a458f6506cd77b56350249948,PodSandboxId:a2e2fb00f6b54b54bf8a74ed71b0ecd454708de3f9624f4d23d2363c215330e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722729099665335596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-4mwk4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a1f7a988-c439-426d-87ef-876b33660835,},Annota
tions:map[string]string{io.kubernetes.container.hash: eb796819,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d,PodSandboxId:37f34e1fe1b859e9f5d3379817e4cbc127e6acdf600f6094bc0b67d09bee4c0e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964593083011,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fzmtg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac3c975-02c6-485b-9cfa-d754718d255e,},Annotations:map[string]string{io.kuber
netes.container.hash: 935d2143,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87,PodSandboxId:925c168e44d8368067f9cbcf5f862c0c4395f449e87fe6a3c0c3c38bd49c314e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722728964520270959,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8qt6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1ff267-f331-4404-8610-50fb0680a2c5,},Annotations:map[string]string{io.kubernetes.container.hash: 56518eb1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a,PodSandboxId:d2e5e2b102cd4b0c096afe93e00e844915505440075fe6bb8c1c436c256434fc,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722728952381332378,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2q4kc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 720b92aa-c5c9-4664-a163-7c94fd5b3a4d,},Annotations:map[string]string{io.kubernetes.container.hash: 94032640,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511,PodSandboxId:842c0109e8643daa97c5101ca1589149cf2777099bad04496a65218225a22ff2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722728948804766128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bbzdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f4d564f-843e-4284-a9fa-792241d9ba26,},Annotations:map[string]string{io.kubernetes.container.hash: 486aaf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70,PodSandboxId:69dc19cc2bbff6b7aba3310ff31c0768920897fa47460362705b09bb7c58150a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722728928937667769,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dba9e755d68dc45e521e88de3636318,},Annotations:map[string]string{io.kubernetes.container.hash: 37cdfbf8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802,PodSandboxId:16e8a700bcd71c6d318a3f80519188037f652c3828ab0377ee9a191d06bf9bdd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1722728928879173032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-349588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9284bf34376b00a4b9834ebca6fce13d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5b1916c-9414-44d0-9e85-641700929fa9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6ffc34fbcff48       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   3                   8ebd5a9b3db7d       kube-controller-manager-ha-349588
	7cadffb6aead6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   1c7a17c509d88       storage-provisioner
	7a61b46762dd2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Running             kube-apiserver            3                   f525d29de8aa7       kube-apiserver-ha-349588
	6a73ecf6dbb19       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   41ca0335f69b2       busybox-fc5497c4f-4mwk4
	addbe1f2c028f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   2                   8ebd5a9b3db7d       kube-controller-manager-ha-349588
	4d85a224c630f       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   32aa658666fbf       kube-vip-ha-349588
	df0aa4c6b57f7       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   890187f2c4ef7       kindnet-2q4kc
	328f4b4dd498a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   44e869a47a7ea       kube-proxy-bbzdt
	d05bb2bd8cb21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c99083e857777       coredns-7db6d8ff4d-fzmtg
	f426f08e475fc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   f525d29de8aa7       kube-apiserver-ha-349588
	e5ef9908c0d49       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   e7080d7770ee6       etcd-ha-349588
	58b3d40254301       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   1c7a17c509d88       storage-provisioner
	5871d70c1b93e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   075cb4b116ff1       kube-scheduler-ha-349588
	7ec508b116836       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   28473da86b0ed       coredns-7db6d8ff4d-z8qt6
	c6fd002f59b0d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   a2e2fb00f6b54       busybox-fc5497c4f-4mwk4
	c780810d93e46       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   37f34e1fe1b85       coredns-7db6d8ff4d-fzmtg
	81817890a62a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   925c168e44d83       coredns-7db6d8ff4d-z8qt6
	8706b763ebe33       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   d2e5e2b102cd4       kindnet-2q4kc
	1f48d6d5328f8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   842c0109e8643       kube-proxy-bbzdt
	9bd785365c881       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   69dc19cc2bbff       etcd-ha-349588
	f061678087351       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   16e8a700bcd71       kube-scheduler-ha-349588
	
	
	==> coredns [7ec508b116836e96218b5cf402f184dabb787529229d9285f7e3de98c647b70b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1663534453]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (03-Aug-2024 23:59:59.108) (total time: 10001ms):
	Trace[1663534453]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:00:09.110)
	Trace[1663534453]: [10.001534042s] [10.001534042s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[84425102]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:00:00.324) (total time: 10001ms):
	Trace[84425102]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:00:10.325)
	Trace[84425102]: [10.001691633s] [10.001691633s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[395855082]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:00:02.660) (total time: 10001ms):
	Trace[395855082]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:00:12.661)
	Trace[395855082]: [10.001108225s] [10.001108225s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [81817890a62a623e594610c06a3bf3881efbf9fbdde34bfce2ba770d70f66b87] <==
	[INFO] 10.244.2.2:56181 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186686s
	[INFO] 10.244.2.2:56701 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166229s
	[INFO] 10.244.2.2:38728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109023s
	[INFO] 10.244.2.2:45155 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001333912s
	[INFO] 10.244.2.2:51605 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083342s
	[INFO] 10.244.1.2:38219 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015823s
	[INFO] 10.244.1.2:52488 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178675s
	[INFO] 10.244.1.2:40148 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097525s
	[INFO] 10.244.0.4:55438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074628s
	[INFO] 10.244.2.2:36883 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010754s
	[INFO] 10.244.2.2:53841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090252s
	[INFO] 10.244.2.2:59602 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092585s
	[INFO] 10.244.1.2:59266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147793s
	[INFO] 10.244.1.2:44530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122943s
	[INFO] 10.244.0.4:42192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097553s
	[INFO] 10.244.2.2:40701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172686s
	[INFO] 10.244.2.2:38338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166475s
	[INFO] 10.244.2.2:58001 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000140105s
	[INFO] 10.244.2.2:51129 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105337s
	[INFO] 10.244.1.2:44130 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106258s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c780810d93e460c61906d4b52fb9b22ab441535b1cf319cbbb1630a5baae7c4d] <==
	[INFO] 10.244.2.2:39556 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137234s
	[INFO] 10.244.2.2:60582 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141615s
	[INFO] 10.244.2.2:36052 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074574s
	[INFO] 10.244.1.2:36007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019702s
	[INFO] 10.244.1.2:39746 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001827365s
	[INFO] 10.244.1.2:47114 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078787s
	[INFO] 10.244.1.2:38856 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198841s
	[INFO] 10.244.1.2:49149 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001428046s
	[INFO] 10.244.0.4:47461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104433s
	[INFO] 10.244.0.4:47790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083369s
	[INFO] 10.244.0.4:39525 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161056s
	[INFO] 10.244.2.2:58034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169362s
	[INFO] 10.244.1.2:44282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187567s
	[INFO] 10.244.1.2:48438 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016257s
	[INFO] 10.244.0.4:52544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142962s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152657s
	[INFO] 10.244.0.4:45953 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009439s
	[INFO] 10.244.1.2:57136 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160618s
	[INFO] 10.244.1.2:58739 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139508s
	[INFO] 10.244.1.2:50023 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125422s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=17, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d05bb2bd8cb21f7431d9ef428af22ceab481ed74c055b608273a2c13a6aaf03a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36116->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: Trace[894765974]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 00:00:04.857) (total time: 13123ms):
	Trace[894765974]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36116->10.96.0.1:443: read: connection reset by peer 13122ms (00:00:17.980)
	Trace[894765974]: [13.123032832s] [13.123032832s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36116->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-349588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:48:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:05:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:00:37 +0000   Sat, 03 Aug 2024 23:49:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    ha-349588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 72ab11669b434797a5e41b5352f74be2
	  System UUID:                72ab1166-9b43-4797-a5e4-1b5352f74be2
	  Boot ID:                    e1637c60-2dbe-4ea9-949e-0f2b10f03d1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4mwk4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-fzmtg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-z8qt6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-349588                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-2q4kc                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-349588             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-349588    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-bbzdt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-349588             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-349588                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m4s   kube-proxy       
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-349588 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-349588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-349588 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-349588 status is now: NodeReady
	  Normal   RegisteredNode           15m    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   RegisteredNode           14m    node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Warning  ContainerGCFailed        6m44s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m56s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   RegisteredNode           4m17s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	  Normal   RegisteredNode           3m11s  node-controller  Node ha-349588 event: Registered Node ha-349588 in Controller
	
	
	Name:               ha-349588-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_50_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:49:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:05:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:01:18 +0000   Sun, 04 Aug 2024 00:00:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    ha-349588-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8919c8bcbd284472a3c4b5b3ae885051
	  System UUID:                8919c8bc-bd28-4472-a3c4-b5b3ae885051
	  Boot ID:                    8eb080af-fc2f-4c39-b585-2131e1411e0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-szvhv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-349588-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-zqhp6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-349588-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-349588-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-gbg5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-349588-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-349588-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m44s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-349588-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-349588-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-349588-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-349588-m02 status is now: NodeNotReady
	  Normal  Starting                 5m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node ha-349588-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node ha-349588-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-349588-m02 event: Registered Node ha-349588-m02 in Controller
	
	
	Name:               ha-349588-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-349588-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=ha-349588
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_03T23_52_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:52:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-349588-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:03:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:03:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:03:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:03:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 04 Aug 2024 00:02:51 +0000   Sun, 04 Aug 2024 00:03:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    ha-349588-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ac9326af96243febea155e979b68343
	  System UUID:                4ac9326a-f962-43fe-bea1-55e979b68343
	  Boot ID:                    b54adeae-269f-4ac2-b146-653f6749ff54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-64855    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-7rfzm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-2sdf6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-349588-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-349588-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-349588-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-349588-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m57s                  node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   NodeNotReady             4m16s                  node-controller  Node ha-349588-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-349588-m04 event: Registered Node ha-349588-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m49s (x2 over 2m49s)  kubelet          Node ha-349588-m04 has been rebooted, boot id: b54adeae-269f-4ac2-b146-653f6749ff54
	  Normal   NodeHasSufficientMemory  2m49s (x3 over 2m49s)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x3 over 2m49s)  kubelet          Node ha-349588-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x3 over 2m49s)  kubelet          Node ha-349588-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m49s                  kubelet          Node ha-349588-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m49s                  kubelet          Node ha-349588-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-349588-m04 status is now: NodeNotReady
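
The conditions and events above show ha-349588-m04 dropping to Unknown/NotReady because the kubelet stopped posting status after the reboot. A quick way to re-check that state is sketched below, assuming the kubeconfig for the ha-349588 profile is active and using the node name from the describe output:

	kubectl get node ha-349588-m04 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# "Unknown" or "False" points at the kubelet; inspect it on the node itself:
	minikube ssh -p ha-349588 -n ha-349588-m04 -- sudo journalctl -u kubelet --no-pager | tail -n 50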
	
	
	==> dmesg <==
	[  +0.061103] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063697] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.170133] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.139803] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.274186] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +4.334862] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.066522] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.414847] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.686183] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.066614] kauditd_printk_skb: 51 callbacks suppressed
	[  +0.504623] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[Aug 3 23:49] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.728228] kauditd_printk_skb: 38 callbacks suppressed
	[  +9.925424] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 3 23:56] kauditd_printk_skb: 1 callbacks suppressed
	[Aug 3 23:59] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +0.140812] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +0.199670] systemd-fstab-generator[3720]: Ignoring "noauto" option for root device
	[  +0.148018] systemd-fstab-generator[3732]: Ignoring "noauto" option for root device
	[  +0.296705] systemd-fstab-generator[3760]: Ignoring "noauto" option for root device
	[  +1.274744] systemd-fstab-generator[3861]: Ignoring "noauto" option for root device
	[  +5.977070] kauditd_printk_skb: 132 callbacks suppressed
	[Aug 4 00:00] kauditd_printk_skb: 76 callbacks suppressed
	[ +27.139653] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.696223] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [9bd785365c88199f59eae42baf2d03dc7006a496dc210aff08c006c49fb97f70] <==
	2024/08/03 23:58:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:58:12.966852Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"862.344946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-03T23:58:12.966866Z","caller":"traceutil/trace.go:171","msg":"trace[581641849] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; }","duration":"862.369417ms","start":"2024-08-03T23:58:12.104492Z","end":"2024-08-03T23:58:12.966861Z","steps":["trace[581641849] 'agreement among raft nodes before linearized reading'  (duration: 862.344537ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-03T23:58:12.966878Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-03T23:58:12.104468Z","time spent":"862.406072ms","remote":"127.0.0.1:36450","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	2024/08/03 23:58:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-03T23:58:13.109423Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-03T23:58:13.109557Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.168:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-03T23:58:13.10972Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"e34fba8f5739efe8","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-03T23:58:13.109931Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.109969Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.109995Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.11007Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110174Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110236Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110249Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af266204c5a37bea"}
	{"level":"info","ts":"2024-08-03T23:58:13.110255Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110269Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110306Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110466Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110511Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110542Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.110552Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-03T23:58:13.114108Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-03T23:58:13.114254Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.168:2380"}
	{"level":"info","ts":"2024-08-03T23:58:13.11428Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-349588","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.168:2380"],"advertise-client-urls":["https://192.168.39.168:2379"]}
	
	
	==> etcd [e5ef9908c0d491c604306db89408d9acb6ae585483060ee0a69478278822e01a] <==
	{"level":"info","ts":"2024-08-04T00:02:11.685861Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e34fba8f5739efe8","to":"f702a198aad1bc13","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-04T00:02:11.68594Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:02:11.692294Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"e34fba8f5739efe8","to":"f702a198aad1bc13","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-04T00:02:11.692413Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:02:11.762771Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.79:41650","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-04T00:02:11.782833Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:02:13.575444Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T00:02:13.576714Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f702a198aad1bc13","rtt":"0s","error":"dial tcp 192.168.39.79:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-04T00:03:05.473504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e34fba8f5739efe8 switched to configuration voters=(12620882778387610602 16379515494576287720)"}
	{"level":"info","ts":"2024-08-04T00:03:05.475849Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"f729467791c9db0d","local-member-id":"e34fba8f5739efe8","removed-remote-peer-id":"f702a198aad1bc13","removed-remote-peer-urls":["https://192.168.39.79:2380"]}
	{"level":"info","ts":"2024-08-04T00:03:05.475976Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:03:05.476277Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:03:05.476336Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:03:05.47689Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:03:05.476947Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:03:05.477047Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:03:05.477275Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13","error":"context canceled"}
	{"level":"warn","ts":"2024-08-04T00:03:05.47761Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f702a198aad1bc13","error":"failed to read f702a198aad1bc13 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-04T00:03:05.477716Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:03:05.478104Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13","error":"context canceled"}
	{"level":"info","ts":"2024-08-04T00:03:05.478278Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"e34fba8f5739efe8","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:03:05.478326Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f702a198aad1bc13"}
	{"level":"info","ts":"2024-08-04T00:03:05.478589Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"e34fba8f5739efe8","removed-remote-peer-id":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:03:05.49277Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"e34fba8f5739efe8","remote-peer-id-stream-handler":"e34fba8f5739efe8","remote-peer-id-from":"f702a198aad1bc13"}
	{"level":"warn","ts":"2024-08-04T00:03:05.50204Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"e34fba8f5739efe8","remote-peer-id-stream-handler":"e34fba8f5739efe8","remote-peer-id-from":"f702a198aad1bc13"}
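
The log above shows the surviving etcd member (e34fba8f5739efe8) removing peer f702a198aad1bc13 (192.168.39.79, the deleted ha-349588-m03 member) after the earlier instance shut down without transferring leadership. The remaining membership can be confirmed with etcdctl from inside the etcd static pod; this is a sketch, assuming the usual etcd-<node> pod naming and minikube's default certificate paths under /var/lib/minikube/certs/etcd:

	kubectl -n kube-system exec etcd-ha-349588 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  member list -w table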
	
	
	==> kernel <==
	 00:05:40 up 17 min,  0 users,  load average: 0.07, 0.35, 0.31
	Linux ha-349588 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [8706b763ebe33ac35526f59595133d913785087e57060b586760443295ca4c5a] <==
	I0803 23:57:43.549775       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:57:43.550863       1 main.go:299] handling current node
	I0803 23:57:43.550923       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:57:43.551146       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:57:43.551311       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:57:43.551395       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:57:43.551528       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:57:43.551559       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:57:53.545391       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:57:53.545450       1 main.go:299] handling current node
	I0803 23:57:53.545471       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:57:53.545494       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:57:53.545664       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:57:53.545670       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	I0803 23:57:53.545729       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:57:53.545752       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:58:03.543010       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0803 23:58:03.543060       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0803 23:58:03.543272       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0803 23:58:03.543299       1 main.go:299] handling current node
	I0803 23:58:03.543322       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0803 23:58:03.543343       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0803 23:58:03.543481       1 main.go:295] Handling node with IPs: map[192.168.39.79:{}]
	I0803 23:58:03.543504       1 main.go:322] Node ha-349588-m03 has CIDR [10.244.2.0/24] 
	E0803 23:58:11.962914       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [df0aa4c6b57f76e9c7d06aadeb6fe5a530d40d40d588363465383e9ce5815b26] <==
	I0804 00:04:54.482790       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0804 00:05:04.486478       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0804 00:05:04.486797       1 main.go:299] handling current node
	I0804 00:05:04.486856       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0804 00:05:04.486880       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0804 00:05:04.487037       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0804 00:05:04.487059       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0804 00:05:14.488615       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0804 00:05:14.488662       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0804 00:05:14.488810       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0804 00:05:14.488818       1 main.go:299] handling current node
	I0804 00:05:14.488828       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0804 00:05:14.488832       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0804 00:05:24.487185       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0804 00:05:24.487468       1 main.go:299] handling current node
	I0804 00:05:24.488240       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0804 00:05:24.488283       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0804 00:05:24.488508       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0804 00:05:24.488534       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	I0804 00:05:34.484486       1 main.go:295] Handling node with IPs: map[192.168.39.168:{}]
	I0804 00:05:34.484566       1 main.go:299] handling current node
	I0804 00:05:34.484605       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0804 00:05:34.484611       1 main.go:322] Node ha-349588-m02 has CIDR [10.244.1.0/24] 
	I0804 00:05:34.484817       1 main.go:295] Handling node with IPs: map[192.168.39.61:{}]
	I0804 00:05:34.484829       1 main.go:322] Node ha-349588-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7a61b46762dd241b33fbef06a0a8881d7a7766e9c070c3221f2b155f9971f558] <==
	I0804 00:00:37.694327       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0804 00:00:37.695200       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0804 00:00:37.670765       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0804 00:00:37.795630       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:00:37.808733       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:00:37.808779       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:00:37.808786       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:00:37.808793       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:00:37.811794       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:00:37.813444       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:00:37.813474       1 policy_source.go:224] refreshing policies
	I0804 00:00:37.813892       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:00:37.868495       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:00:37.869492       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:00:37.870749       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:00:37.871273       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:00:37.871307       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:00:37.871927       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:00:37.878576       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0804 00:00:37.888810       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.67 192.168.39.79]
	I0804 00:00:37.891044       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:00:37.905166       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0804 00:00:37.913704       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0804 00:00:38.676473       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0804 00:00:39.240888       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.168 192.168.39.67]
	
	
	==> kube-apiserver [f426f08e475fcacfdba5108947556d0391f93e97bba5bb61606e7ee1abe41b4f] <==
	I0803 23:59:53.742303       1 options.go:221] external host was not specified, using 192.168.39.168
	I0803 23:59:53.745863       1 server.go:148] Version: v1.30.3
	I0803 23:59:53.746534       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 23:59:54.471945       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0803 23:59:54.484851       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0803 23:59:54.488273       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0803 23:59:54.488316       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0803 23:59:54.488976       1 instance.go:299] Using reconciler: lease
	W0804 00:00:14.471053       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0804 00:00:14.471178       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0804 00:00:14.490464       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
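
This apiserver instance gave up waiting for etcd on 127.0.0.1:2379 and exited ("Error creating leases: error creating storage factory: context deadline exceeded"); the instance in the preceding kube-apiserver section came up later, once etcd answered again. Whether etcd is actually listening can be checked directly on the node; a sketch, assuming ss and crictl are available inside the minikube VM (they normally are):

	minikube ssh -p ha-349588 -- "sudo ss -ltn | grep 2379; sudo crictl ps -a --name etcd"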
	
	
	==> kube-controller-manager [6ffc34fbcff485ce3e2ed5a78489afb6ce07179caba58cb80ad336a0517d6d7d] <==
	E0804 00:03:42.897976       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	E0804 00:03:42.897987       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	E0804 00:03:42.897993       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	E0804 00:03:42.898000       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	I0804 00:03:53.036969       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.058853ms"
	I0804 00:03:53.037259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.731µs"
	E0804 00:04:02.899214       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	E0804 00:04:02.899306       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	E0804 00:04:02.899314       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	E0804 00:04:02.899319       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	E0804 00:04:02.899325       1 gc_controller.go:153] "Failed to get node" err="node \"ha-349588-m03\" not found" logger="pod-garbage-collector-controller" node="ha-349588-m03"
	I0804 00:04:02.913735       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7sr59"
	I0804 00:04:02.967759       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7sr59"
	I0804 00:04:02.967898       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-349588-m03"
	I0804 00:04:03.010946       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-349588-m03"
	I0804 00:04:03.010987       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gxhmd"
	I0804 00:04:03.043418       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-gxhmd"
	I0804 00:04:03.043555       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-349588-m03"
	I0804 00:04:03.080867       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-349588-m03"
	I0804 00:04:03.082439       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-349588-m03"
	I0804 00:04:03.118462       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-349588-m03"
	I0804 00:04:03.118584       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-349588-m03"
	I0804 00:04:03.151252       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-349588-m03"
	I0804 00:04:03.151295       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-349588-m03"
	I0804 00:04:03.183775       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-349588-m03"
	
	
	==> kube-controller-manager [addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e] <==
	I0804 00:00:26.596912       1 serving.go:380] Generated self-signed cert in-memory
	I0804 00:00:26.842257       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0804 00:00:26.842340       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:26.844103       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 00:00:26.844259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0804 00:00:26.844313       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 00:00:26.844599       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0804 00:00:37.722180       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [1f48d6d5328f8dfebbd9d6d84dd159be04c0e7999297d48c9150cd26589bf511] <==
	E0803 23:57:03.483892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:06.685563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:06.685637       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:09.757745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:09.757961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:09.759601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:09.759686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:12.829011       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:12.829149       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:22.045137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:22.045419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:22.045649       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:22.045759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:22.046113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:22.046220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:37.403870       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:37.403971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:43.551957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:43.552075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:57:49.693841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:57:49.693985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-349588&resourceVersion=1875": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:58:08.125471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:58:08.125598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1869": dial tcp 192.168.39.254:8443: connect: no route to host
	W0803 23:58:11.195928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	E0803 23:58:11.196061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1882": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [328f4b4dd498a763d37c7be50784ff053c299fd78b57c3ca2145bb0f97e69e53] <==
	I0803 23:59:54.729598       1 server_linux.go:69] "Using iptables proxy"
	E0803 23:59:55.646053       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0803 23:59:58.716215       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:01.788550       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:07.933249       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:17.148892       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 00:00:35.582107       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-349588\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0804 00:00:35.582219       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0804 00:00:35.632677       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:00:35.632801       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:00:35.632828       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:00:35.636138       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:00:35.636550       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:00:35.636599       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:00:35.638430       1 config.go:192] "Starting service config controller"
	I0804 00:00:35.638480       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:00:35.638512       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:00:35.638536       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:00:35.639233       1 config.go:319] "Starting node config controller"
	I0804 00:00:35.639267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:00:37.440877       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:00:37.441075       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:00:37.443014       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5871d70c1b93ebae8684f234047b04324218bb8435d39dfdd41fc59b2c0925cc] <==
	W0804 00:00:32.702704       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.168:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.702809       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.168:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.740556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.168:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.740643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.168:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.935901       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.168:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.935969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.168:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:32.964858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.168:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:32.964932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.168:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:34.260851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.168:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:34.260984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.168:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:34.351043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.168:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:34.351228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.168:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:34.446203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.168:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	E0804 00:00:34.446339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.168:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.168:8443: connect: connection refused
	W0804 00:00:37.711889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0804 00:00:37.712337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 00:00:37.712779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 00:00:37.712894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 00:00:37.713014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 00:00:37.713101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0804 00:00:57.202851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 00:03:02.145491       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-64855\": pod busybox-fc5497c4f-64855 is already assigned to node \"ha-349588-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-64855" node="ha-349588-m04"
	E0804 00:03:02.146238       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1a726f77-4470-4e12-bdba-2f0395f06531(default/busybox-fc5497c4f-64855) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-64855"
	E0804 00:03:02.146417       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-64855\": pod busybox-fc5497c4f-64855 is already assigned to node \"ha-349588-m04\"" pod="default/busybox-fc5497c4f-64855"
	I0804 00:03:02.146520       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-64855" node="ha-349588-m04"
	
	
	==> kube-scheduler [f06167808735141cd81abf99d0e9c552e6c36b01fc9a595da5c7d0b2278ca802] <==
	W0803 23:58:09.600961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0803 23:58:09.601001       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0803 23:58:09.718084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:58:09.718242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0803 23:58:09.879521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:58:09.879701       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 23:58:09.993051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 23:58:09.993103       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 23:58:10.469187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:58:10.469286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 23:58:10.636027       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:58:10.636077       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:58:10.716843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:58:10.716900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0803 23:58:11.078253       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:58:11.078304       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:58:11.131750       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:58:11.131798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 23:58:11.708122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:58:11.708180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:58:11.736684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:58:11.736730       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0803 23:58:12.576485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:58:12.576571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:58:12.941217       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 00:01:11 ha-349588 kubelet[1373]: I0804 00:01:11.078533    1373 scope.go:117] "RemoveContainer" containerID="addbe1f2c028f4ddaec4443d286e965383d8db90811556d2989760374f63206e"
	Aug 04 00:01:13 ha-349588 kubelet[1373]: I0804 00:01:13.326476    1373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-4mwk4" podStartSLOduration=574.347075815 podStartE2EDuration="9m35.326448023s" podCreationTimestamp="2024-08-03 23:51:38 +0000 UTC" firstStartedPulling="2024-08-03 23:51:38.66567803 +0000 UTC m=+163.780373453" lastFinishedPulling="2024-08-03 23:51:39.645050226 +0000 UTC m=+164.759745661" observedRunningTime="2024-08-03 23:51:39.812711683 +0000 UTC m=+164.927407128" watchObservedRunningTime="2024-08-04 00:01:13.326448023 +0000 UTC m=+738.441143457"
	Aug 04 00:01:27 ha-349588 kubelet[1373]: I0804 00:01:27.080598    1373 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-349588" podUID="b3a4c252-ee5e-4b2f-b982-a09904a9c547"
	Aug 04 00:01:27 ha-349588 kubelet[1373]: I0804 00:01:27.108158    1373 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-349588"
	Aug 04 00:01:28 ha-349588 kubelet[1373]: I0804 00:01:28.010691    1373 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-349588" podUID="b3a4c252-ee5e-4b2f-b982-a09904a9c547"
	Aug 04 00:01:55 ha-349588 kubelet[1373]: E0804 00:01:55.141260    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:01:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:01:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:01:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:01:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:02:55 ha-349588 kubelet[1373]: E0804 00:02:55.141464    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:02:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:02:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:02:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:02:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:03:55 ha-349588 kubelet[1373]: E0804 00:03:55.140999    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:03:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:03:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:03:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:03:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:04:55 ha-349588 kubelet[1373]: E0804 00:04:55.140931    1373 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:04:55 ha-349588 kubelet[1373]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:04:55 ha-349588 kubelet[1373]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:04:55 ha-349588 kubelet[1373]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:04:55 ha-349588 kubelet[1373]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:05:39.044584  354889 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-323890/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-349588 -n ha-349588
helpers_test.go:261: (dbg) Run:  kubectl --context ha-349588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.04s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (318.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453015
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-453015
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-453015: exit status 82 (2m1.900712363s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-453015-m03"  ...
	* Stopping node "multinode-453015-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-453015" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453015 --wait=true -v=8 --alsologtostderr
E0804 00:25:27.465611  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0804 00:27:24.417162  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453015 --wait=true -v=8 --alsologtostderr: (3m14.11867879s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453015
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-453015 -n multinode-453015
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-453015 logs -n 25: (1.499992613s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2291356066/001/cp-test_multinode-453015-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015:/home/docker/cp-test_multinode-453015-m02_multinode-453015.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015 sudo cat                                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m02_multinode-453015.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03:/home/docker/cp-test_multinode-453015-m02_multinode-453015-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015-m03 sudo cat                                   | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m02_multinode-453015-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp testdata/cp-test.txt                                                | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2291356066/001/cp-test_multinode-453015-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015:/home/docker/cp-test_multinode-453015-m03_multinode-453015.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015 sudo cat                                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m03_multinode-453015.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02:/home/docker/cp-test_multinode-453015-m03_multinode-453015-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015-m02 sudo cat                                   | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m03_multinode-453015-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-453015 node stop m03                                                          | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	| node    | multinode-453015 node start                                                             | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-453015                                                                | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:23 UTC |                     |
	| stop    | -p multinode-453015                                                                     | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:23 UTC |                     |
	| start   | -p multinode-453015                                                                     | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:25 UTC | 04 Aug 24 00:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-453015                                                                | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:25:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:25:24.496751  365167 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:25:24.497014  365167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:25:24.497024  365167 out.go:304] Setting ErrFile to fd 2...
	I0804 00:25:24.497028  365167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:25:24.497222  365167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:25:24.497853  365167 out.go:298] Setting JSON to false
	I0804 00:25:24.498850  365167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32872,"bootTime":1722698252,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:25:24.498939  365167 start.go:139] virtualization: kvm guest
	I0804 00:25:24.501166  365167 out.go:177] * [multinode-453015] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:25:24.502757  365167 out.go:177]   - MINIKUBE_LOCATION=19370
	I0804 00:25:24.502759  365167 notify.go:220] Checking for updates...
	I0804 00:25:24.504047  365167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:25:24.505361  365167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:25:24.506570  365167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:25:24.507779  365167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:25:24.509099  365167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:25:24.510779  365167 config.go:182] Loaded profile config "multinode-453015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:25:24.510876  365167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:25:24.511344  365167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:25:24.511399  365167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:25:24.527810  365167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0804 00:25:24.528277  365167 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:25:24.528968  365167 main.go:141] libmachine: Using API Version  1
	I0804 00:25:24.529003  365167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:25:24.529385  365167 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:25:24.529623  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:25:24.566602  365167 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:25:24.567934  365167 start.go:297] selected driver: kvm2
	I0804 00:25:24.567947  365167 start.go:901] validating driver "kvm2" against &{Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:25:24.568099  365167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:25:24.568408  365167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:25:24.568474  365167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:25:24.584291  365167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:25:24.585309  365167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:25:24.585387  365167 cni.go:84] Creating CNI manager for ""
	I0804 00:25:24.585404  365167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 00:25:24.585492  365167 start.go:340] cluster config:
	{Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:25:24.585720  365167 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:25:24.588184  365167 out.go:177] * Starting "multinode-453015" primary control-plane node in "multinode-453015" cluster
	I0804 00:25:24.589521  365167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:25:24.589567  365167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:25:24.589579  365167 cache.go:56] Caching tarball of preloaded images
	I0804 00:25:24.589670  365167 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:25:24.589693  365167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:25:24.589824  365167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/config.json ...
	I0804 00:25:24.590103  365167 start.go:360] acquireMachinesLock for multinode-453015: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:25:24.590166  365167 start.go:364] duration metric: took 37.785µs to acquireMachinesLock for "multinode-453015"
	I0804 00:25:24.590189  365167 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:25:24.590200  365167 fix.go:54] fixHost starting: 
	I0804 00:25:24.590469  365167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:25:24.590509  365167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:25:24.605373  365167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0804 00:25:24.605846  365167 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:25:24.606474  365167 main.go:141] libmachine: Using API Version  1
	I0804 00:25:24.606499  365167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:25:24.606900  365167 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:25:24.607143  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:25:24.607334  365167 main.go:141] libmachine: (multinode-453015) Calling .GetState
	I0804 00:25:24.609029  365167 fix.go:112] recreateIfNeeded on multinode-453015: state=Running err=<nil>
	W0804 00:25:24.609048  365167 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:25:24.611168  365167 out.go:177] * Updating the running kvm2 "multinode-453015" VM ...
	I0804 00:25:24.612577  365167 machine.go:94] provisionDockerMachine start ...
	I0804 00:25:24.612611  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:25:24.612888  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.615359  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.615914  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.615946  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.616119  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:24.616302  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.616461  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.616594  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:24.616757  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:24.616974  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:24.616984  365167 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:25:24.727400  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453015
	
	I0804 00:25:24.727436  365167 main.go:141] libmachine: (multinode-453015) Calling .GetMachineName
	I0804 00:25:24.727677  365167 buildroot.go:166] provisioning hostname "multinode-453015"
	I0804 00:25:24.727709  365167 main.go:141] libmachine: (multinode-453015) Calling .GetMachineName
	I0804 00:25:24.727919  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.730722  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.731106  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.731147  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.731266  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:24.731451  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.731608  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.731916  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:24.732109  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:24.732287  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:24.732302  365167 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-453015 && echo "multinode-453015" | sudo tee /etc/hostname
	I0804 00:25:24.854275  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453015
	
	I0804 00:25:24.854302  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.857285  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.857688  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.857730  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.857912  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:24.858128  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.858307  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.858446  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:24.858600  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:24.858781  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:24.858796  365167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-453015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-453015/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-453015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:25:24.966976  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:25:24.967022  365167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0804 00:25:24.967093  365167 buildroot.go:174] setting up certificates
	I0804 00:25:24.967106  365167 provision.go:84] configureAuth start
	I0804 00:25:24.967122  365167 main.go:141] libmachine: (multinode-453015) Calling .GetMachineName
	I0804 00:25:24.967565  365167 main.go:141] libmachine: (multinode-453015) Calling .GetIP
	I0804 00:25:24.970191  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.970550  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.970583  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.970747  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.973054  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.973427  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.973457  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.973584  365167 provision.go:143] copyHostCerts
	I0804 00:25:24.973614  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0804 00:25:24.973667  365167 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0804 00:25:24.973678  365167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0804 00:25:24.973758  365167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0804 00:25:24.973882  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0804 00:25:24.973915  365167 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0804 00:25:24.973925  365167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0804 00:25:24.973969  365167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0804 00:25:24.974032  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0804 00:25:24.974057  365167 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0804 00:25:24.974066  365167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0804 00:25:24.974101  365167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0804 00:25:24.974164  365167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.multinode-453015 san=[127.0.0.1 192.168.39.23 localhost minikube multinode-453015]
	I0804 00:25:25.081152  365167 provision.go:177] copyRemoteCerts
	I0804 00:25:25.081295  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:25:25.081332  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:25.084040  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.084422  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:25.084450  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.084626  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:25.084828  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:25.085020  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:25.085157  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:25:25.173662  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 00:25:25.173743  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0804 00:25:25.202657  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 00:25:25.202753  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:25:25.245307  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 00:25:25.245380  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0804 00:25:25.274556  365167 provision.go:87] duration metric: took 307.435631ms to configureAuth
	I0804 00:25:25.274592  365167 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:25:25.274839  365167 config.go:182] Loaded profile config "multinode-453015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:25:25.274916  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:25.277593  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.277933  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:25.277976  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.278136  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:25.278318  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:25.278490  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:25.278607  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:25.278779  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:25.278956  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:25.278970  365167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:26:56.000723  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:26:56.000759  365167 machine.go:97] duration metric: took 1m31.388160451s to provisionDockerMachine
	I0804 00:26:56.000774  365167 start.go:293] postStartSetup for "multinode-453015" (driver="kvm2")
	I0804 00:26:56.000785  365167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:26:56.000805  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.001219  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:26:56.001283  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.004882  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.005311  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.005343  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.005456  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.005712  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.005917  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.006067  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:26:56.093811  365167 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:26:56.098373  365167 command_runner.go:130] > NAME=Buildroot
	I0804 00:26:56.098402  365167 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0804 00:26:56.098408  365167 command_runner.go:130] > ID=buildroot
	I0804 00:26:56.098429  365167 command_runner.go:130] > VERSION_ID=2023.02.9
	I0804 00:26:56.098436  365167 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0804 00:26:56.098483  365167 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:26:56.098502  365167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0804 00:26:56.098593  365167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0804 00:26:56.098693  365167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0804 00:26:56.098709  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0804 00:26:56.098798  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:26:56.109679  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:26:56.136006  365167 start.go:296] duration metric: took 135.215306ms for postStartSetup
	I0804 00:26:56.136054  365167 fix.go:56] duration metric: took 1m31.545854903s for fixHost
	I0804 00:26:56.136088  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.138687  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.139216  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.139244  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.139412  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.139650  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.139839  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.139998  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.140152  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:26:56.140388  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:26:56.140403  365167 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:26:56.246368  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731216.226438620
	
	I0804 00:26:56.246400  365167 fix.go:216] guest clock: 1722731216.226438620
	I0804 00:26:56.246411  365167 fix.go:229] Guest: 2024-08-04 00:26:56.22643862 +0000 UTC Remote: 2024-08-04 00:26:56.136067969 +0000 UTC m=+91.677694190 (delta=90.370651ms)
	I0804 00:26:56.246440  365167 fix.go:200] guest clock delta is within tolerance: 90.370651ms
	I0804 00:26:56.246447  365167 start.go:83] releasing machines lock for "multinode-453015", held for 1m31.656268618s
	I0804 00:26:56.246508  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.246823  365167 main.go:141] libmachine: (multinode-453015) Calling .GetIP
	I0804 00:26:56.249371  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.249807  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.249830  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.249946  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.250464  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.250633  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.250758  365167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:26:56.250801  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.250862  365167 ssh_runner.go:195] Run: cat /version.json
	I0804 00:26:56.250891  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.253545  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.253836  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.254019  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.254048  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.254173  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.254178  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.254196  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.254335  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.254407  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.254522  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.254587  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.254653  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:26:56.254704  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.254841  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:26:56.330901  365167 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0804 00:26:56.331124  365167 ssh_runner.go:195] Run: systemctl --version
	I0804 00:26:56.354072  365167 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 00:26:56.354913  365167 command_runner.go:130] > systemd 252 (252)
	I0804 00:26:56.354967  365167 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0804 00:26:56.355035  365167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:26:56.521151  365167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 00:26:56.528298  365167 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0804 00:26:56.528352  365167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:26:56.528416  365167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:26:56.538664  365167 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 00:26:56.538699  365167 start.go:495] detecting cgroup driver to use...
	I0804 00:26:56.538772  365167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:26:56.558580  365167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:26:56.573289  365167 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:26:56.573355  365167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:26:56.588131  365167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:26:56.602596  365167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:26:56.760968  365167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:26:56.922806  365167 docker.go:233] disabling docker service ...
	I0804 00:26:56.922891  365167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:26:56.943968  365167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:26:56.959205  365167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:26:57.108075  365167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:26:57.248638  365167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:26:57.265297  365167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:26:57.285174  365167 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0804 00:26:57.285218  365167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:26:57.285282  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.302481  365167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:26:57.302563  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.316640  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.330841  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.342444  365167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:26:57.390376  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.424164  365167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.453285  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.474444  365167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:26:57.485887  365167 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 00:26:57.486184  365167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:26:57.497288  365167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:26:57.655627  365167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:26:57.927356  365167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:26:57.927428  365167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:26:57.932469  365167 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0804 00:26:57.932504  365167 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 00:26:57.932511  365167 command_runner.go:130] > Device: 0,22	Inode: 1396        Links: 1
	I0804 00:26:57.932517  365167 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 00:26:57.932532  365167 command_runner.go:130] > Access: 2024-08-04 00:26:57.775428491 +0000
	I0804 00:26:57.932543  365167 command_runner.go:130] > Modify: 2024-08-04 00:26:57.775428491 +0000
	I0804 00:26:57.932550  365167 command_runner.go:130] > Change: 2024-08-04 00:26:57.775428491 +0000
	I0804 00:26:57.932556  365167 command_runner.go:130] >  Birth: -
	I0804 00:26:57.932601  365167 start.go:563] Will wait 60s for crictl version
	I0804 00:26:57.932659  365167 ssh_runner.go:195] Run: which crictl
	I0804 00:26:57.936968  365167 command_runner.go:130] > /usr/bin/crictl
	I0804 00:26:57.937058  365167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:26:57.983207  365167 command_runner.go:130] > Version:  0.1.0
	I0804 00:26:57.983235  365167 command_runner.go:130] > RuntimeName:  cri-o
	I0804 00:26:57.983240  365167 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0804 00:26:57.983245  365167 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 00:26:57.983399  365167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:26:57.983510  365167 ssh_runner.go:195] Run: crio --version
	I0804 00:26:58.013573  365167 command_runner.go:130] > crio version 1.29.1
	I0804 00:26:58.013600  365167 command_runner.go:130] > Version:        1.29.1
	I0804 00:26:58.013608  365167 command_runner.go:130] > GitCommit:      unknown
	I0804 00:26:58.013614  365167 command_runner.go:130] > GitCommitDate:  unknown
	I0804 00:26:58.013619  365167 command_runner.go:130] > GitTreeState:   clean
	I0804 00:26:58.013626  365167 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 00:26:58.013631  365167 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 00:26:58.013636  365167 command_runner.go:130] > Compiler:       gc
	I0804 00:26:58.013642  365167 command_runner.go:130] > Platform:       linux/amd64
	I0804 00:26:58.013648  365167 command_runner.go:130] > Linkmode:       dynamic
	I0804 00:26:58.013655  365167 command_runner.go:130] > BuildTags:      
	I0804 00:26:58.013662  365167 command_runner.go:130] >   containers_image_ostree_stub
	I0804 00:26:58.013669  365167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 00:26:58.013679  365167 command_runner.go:130] >   btrfs_noversion
	I0804 00:26:58.013687  365167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 00:26:58.013694  365167 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 00:26:58.013700  365167 command_runner.go:130] >   seccomp
	I0804 00:26:58.013710  365167 command_runner.go:130] > LDFlags:          unknown
	I0804 00:26:58.013717  365167 command_runner.go:130] > SeccompEnabled:   true
	I0804 00:26:58.013735  365167 command_runner.go:130] > AppArmorEnabled:  false
	I0804 00:26:58.015071  365167 ssh_runner.go:195] Run: crio --version
	I0804 00:26:58.046148  365167 command_runner.go:130] > crio version 1.29.1
	I0804 00:26:58.046178  365167 command_runner.go:130] > Version:        1.29.1
	I0804 00:26:58.046186  365167 command_runner.go:130] > GitCommit:      unknown
	I0804 00:26:58.046192  365167 command_runner.go:130] > GitCommitDate:  unknown
	I0804 00:26:58.046197  365167 command_runner.go:130] > GitTreeState:   clean
	I0804 00:26:58.046203  365167 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 00:26:58.046207  365167 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 00:26:58.046211  365167 command_runner.go:130] > Compiler:       gc
	I0804 00:26:58.046217  365167 command_runner.go:130] > Platform:       linux/amd64
	I0804 00:26:58.046223  365167 command_runner.go:130] > Linkmode:       dynamic
	I0804 00:26:58.046235  365167 command_runner.go:130] > BuildTags:      
	I0804 00:26:58.046242  365167 command_runner.go:130] >   containers_image_ostree_stub
	I0804 00:26:58.046251  365167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 00:26:58.046261  365167 command_runner.go:130] >   btrfs_noversion
	I0804 00:26:58.046270  365167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 00:26:58.046280  365167 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 00:26:58.046289  365167 command_runner.go:130] >   seccomp
	I0804 00:26:58.046299  365167 command_runner.go:130] > LDFlags:          unknown
	I0804 00:26:58.046308  365167 command_runner.go:130] > SeccompEnabled:   true
	I0804 00:26:58.046318  365167 command_runner.go:130] > AppArmorEnabled:  false
	I0804 00:26:58.049234  365167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:26:58.050832  365167 main.go:141] libmachine: (multinode-453015) Calling .GetIP
	I0804 00:26:58.053788  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:58.054115  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:58.054147  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:58.054381  365167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:26:58.059352  365167 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0804 00:26:58.059504  365167 kubeadm.go:883] updating cluster {Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:26:58.059763  365167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:26:58.059827  365167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:26:58.105400  365167 command_runner.go:130] > {
	I0804 00:26:58.105433  365167 command_runner.go:130] >   "images": [
	I0804 00:26:58.105439  365167 command_runner.go:130] >     {
	I0804 00:26:58.105452  365167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 00:26:58.105459  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105468  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 00:26:58.105474  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105481  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105495  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 00:26:58.105516  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 00:26:58.105526  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105532  365167 command_runner.go:130] >       "size": "87165492",
	I0804 00:26:58.105538  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.105546  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.105561  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.105570  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.105576  365167 command_runner.go:130] >     },
	I0804 00:26:58.105584  365167 command_runner.go:130] >     {
	I0804 00:26:58.105606  365167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 00:26:58.105616  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105624  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 00:26:58.105629  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105636  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105648  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 00:26:58.105673  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 00:26:58.105681  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105688  365167 command_runner.go:130] >       "size": "87174707",
	I0804 00:26:58.105697  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.105712  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.105721  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.105732  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.105739  365167 command_runner.go:130] >     },
	I0804 00:26:58.105745  365167 command_runner.go:130] >     {
	I0804 00:26:58.105757  365167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 00:26:58.105766  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105776  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 00:26:58.105783  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105789  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105802  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 00:26:58.105815  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 00:26:58.105822  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105828  365167 command_runner.go:130] >       "size": "1363676",
	I0804 00:26:58.105836  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.105844  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.105852  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.105860  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.105865  365167 command_runner.go:130] >     },
	I0804 00:26:58.105872  365167 command_runner.go:130] >     {
	I0804 00:26:58.105881  365167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 00:26:58.105889  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105899  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 00:26:58.105907  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105913  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105960  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 00:26:58.105998  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 00:26:58.106006  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106012  365167 command_runner.go:130] >       "size": "31470524",
	I0804 00:26:58.106020  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.106029  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106038  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106053  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106061  365167 command_runner.go:130] >     },
	I0804 00:26:58.106068  365167 command_runner.go:130] >     {
	I0804 00:26:58.106079  365167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 00:26:58.106087  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106099  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 00:26:58.106106  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106116  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106131  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 00:26:58.106145  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 00:26:58.106153  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106161  365167 command_runner.go:130] >       "size": "61245718",
	I0804 00:26:58.106170  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.106179  365167 command_runner.go:130] >       "username": "nonroot",
	I0804 00:26:58.106188  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106197  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106205  365167 command_runner.go:130] >     },
	I0804 00:26:58.106210  365167 command_runner.go:130] >     {
	I0804 00:26:58.106222  365167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 00:26:58.106231  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106239  365167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 00:26:58.106248  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106256  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106269  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 00:26:58.106281  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 00:26:58.106290  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106300  365167 command_runner.go:130] >       "size": "150779692",
	I0804 00:26:58.106309  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106319  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106324  365167 command_runner.go:130] >       },
	I0804 00:26:58.106331  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106340  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106348  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106356  365167 command_runner.go:130] >     },
	I0804 00:26:58.106361  365167 command_runner.go:130] >     {
	I0804 00:26:58.106371  365167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 00:26:58.106389  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106400  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 00:26:58.106409  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106415  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106427  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 00:26:58.106440  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 00:26:58.106447  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106456  365167 command_runner.go:130] >       "size": "117609954",
	I0804 00:26:58.106464  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106473  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106480  365167 command_runner.go:130] >       },
	I0804 00:26:58.106486  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106495  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106500  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106507  365167 command_runner.go:130] >     },
	I0804 00:26:58.106512  365167 command_runner.go:130] >     {
	I0804 00:26:58.106523  365167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 00:26:58.106531  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106541  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 00:26:58.106549  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106557  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106593  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 00:26:58.106608  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 00:26:58.106617  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106626  365167 command_runner.go:130] >       "size": "112198984",
	I0804 00:26:58.106634  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106642  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106648  365167 command_runner.go:130] >       },
	I0804 00:26:58.106657  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106662  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106667  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106672  365167 command_runner.go:130] >     },
	I0804 00:26:58.106676  365167 command_runner.go:130] >     {
	I0804 00:26:58.106685  365167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 00:26:58.106690  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106698  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 00:26:58.106713  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106720  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106731  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 00:26:58.106741  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 00:26:58.106747  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106753  365167 command_runner.go:130] >       "size": "85953945",
	I0804 00:26:58.106758  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.106764  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106770  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106780  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106787  365167 command_runner.go:130] >     },
	I0804 00:26:58.106793  365167 command_runner.go:130] >     {
	I0804 00:26:58.106804  365167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 00:26:58.106813  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106823  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 00:26:58.106831  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106839  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106850  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 00:26:58.106864  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 00:26:58.106872  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106878  365167 command_runner.go:130] >       "size": "63051080",
	I0804 00:26:58.106886  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106894  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106902  365167 command_runner.go:130] >       },
	I0804 00:26:58.106909  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106917  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106926  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106934  365167 command_runner.go:130] >     },
	I0804 00:26:58.106940  365167 command_runner.go:130] >     {
	I0804 00:26:58.106951  365167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 00:26:58.106959  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106967  365167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 00:26:58.106982  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106987  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106998  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 00:26:58.107010  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 00:26:58.107029  365167 command_runner.go:130] >       ],
	I0804 00:26:58.107039  365167 command_runner.go:130] >       "size": "750414",
	I0804 00:26:58.107045  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.107068  365167 command_runner.go:130] >         "value": "65535"
	I0804 00:26:58.107077  365167 command_runner.go:130] >       },
	I0804 00:26:58.107083  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.107092  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.107098  365167 command_runner.go:130] >       "pinned": true
	I0804 00:26:58.107105  365167 command_runner.go:130] >     }
	I0804 00:26:58.107110  365167 command_runner.go:130] >   ]
	I0804 00:26:58.107118  365167 command_runner.go:130] > }
	I0804 00:26:58.107414  365167 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:26:58.107434  365167 crio.go:433] Images already preloaded, skipping extraction
	I0804 00:26:58.107496  365167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:26:58.141278  365167 command_runner.go:130] > {
	I0804 00:26:58.141307  365167 command_runner.go:130] >   "images": [
	I0804 00:26:58.141314  365167 command_runner.go:130] >     {
	I0804 00:26:58.141337  365167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 00:26:58.141344  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141354  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 00:26:58.141359  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141366  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141379  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 00:26:58.141392  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 00:26:58.141399  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141405  365167 command_runner.go:130] >       "size": "87165492",
	I0804 00:26:58.141414  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141420  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141430  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141438  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141444  365167 command_runner.go:130] >     },
	I0804 00:26:58.141451  365167 command_runner.go:130] >     {
	I0804 00:26:58.141459  365167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 00:26:58.141466  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141474  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 00:26:58.141482  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141487  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141499  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 00:26:58.141523  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 00:26:58.141533  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141539  365167 command_runner.go:130] >       "size": "87174707",
	I0804 00:26:58.141547  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141560  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141568  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141576  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141584  365167 command_runner.go:130] >     },
	I0804 00:26:58.141589  365167 command_runner.go:130] >     {
	I0804 00:26:58.141601  365167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 00:26:58.141610  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141621  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 00:26:58.141629  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141635  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141648  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 00:26:58.141667  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 00:26:58.141675  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141681  365167 command_runner.go:130] >       "size": "1363676",
	I0804 00:26:58.141687  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141696  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141704  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141712  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141718  365167 command_runner.go:130] >     },
	I0804 00:26:58.141723  365167 command_runner.go:130] >     {
	I0804 00:26:58.141734  365167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 00:26:58.141742  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141753  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 00:26:58.141760  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141769  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141780  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 00:26:58.141807  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 00:26:58.141815  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141819  365167 command_runner.go:130] >       "size": "31470524",
	I0804 00:26:58.141828  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141837  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141846  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141855  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141863  365167 command_runner.go:130] >     },
	I0804 00:26:58.141871  365167 command_runner.go:130] >     {
	I0804 00:26:58.141880  365167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 00:26:58.141890  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141900  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 00:26:58.141908  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141914  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141928  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 00:26:58.141943  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 00:26:58.141951  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141960  365167 command_runner.go:130] >       "size": "61245718",
	I0804 00:26:58.141976  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141985  365167 command_runner.go:130] >       "username": "nonroot",
	I0804 00:26:58.141991  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142007  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142015  365167 command_runner.go:130] >     },
	I0804 00:26:58.142020  365167 command_runner.go:130] >     {
	I0804 00:26:58.142031  365167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 00:26:58.142041  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142050  365167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 00:26:58.142059  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142067  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142077  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 00:26:58.142089  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 00:26:58.142097  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142103  365167 command_runner.go:130] >       "size": "150779692",
	I0804 00:26:58.142111  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142120  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142129  365167 command_runner.go:130] >       },
	I0804 00:26:58.142137  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142146  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142155  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142159  365167 command_runner.go:130] >     },
	I0804 00:26:58.142166  365167 command_runner.go:130] >     {
	I0804 00:26:58.142175  365167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 00:26:58.142183  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142192  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 00:26:58.142199  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142206  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142221  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 00:26:58.142233  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 00:26:58.142241  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142247  365167 command_runner.go:130] >       "size": "117609954",
	I0804 00:26:58.142254  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142260  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142267  365167 command_runner.go:130] >       },
	I0804 00:26:58.142273  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142282  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142290  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142298  365167 command_runner.go:130] >     },
	I0804 00:26:58.142313  365167 command_runner.go:130] >     {
	I0804 00:26:58.142325  365167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 00:26:58.142333  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142342  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 00:26:58.142350  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142356  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142391  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 00:26:58.142406  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 00:26:58.142412  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142419  365167 command_runner.go:130] >       "size": "112198984",
	I0804 00:26:58.142427  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142433  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142440  365167 command_runner.go:130] >       },
	I0804 00:26:58.142456  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142465  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142472  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142479  365167 command_runner.go:130] >     },
	I0804 00:26:58.142485  365167 command_runner.go:130] >     {
	I0804 00:26:58.142497  365167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 00:26:58.142505  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142513  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 00:26:58.142520  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142527  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142538  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 00:26:58.142553  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 00:26:58.142561  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142567  365167 command_runner.go:130] >       "size": "85953945",
	I0804 00:26:58.142576  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.142582  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142591  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142598  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142606  365167 command_runner.go:130] >     },
	I0804 00:26:58.142611  365167 command_runner.go:130] >     {
	I0804 00:26:58.142621  365167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 00:26:58.142630  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142639  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 00:26:58.142656  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142677  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142691  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 00:26:58.142704  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 00:26:58.142709  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142719  365167 command_runner.go:130] >       "size": "63051080",
	I0804 00:26:58.142725  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142733  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142739  365167 command_runner.go:130] >       },
	I0804 00:26:58.142765  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142774  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142781  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142789  365167 command_runner.go:130] >     },
	I0804 00:26:58.142794  365167 command_runner.go:130] >     {
	I0804 00:26:58.142805  365167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 00:26:58.142814  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142821  365167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 00:26:58.142829  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142835  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142848  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 00:26:58.142864  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 00:26:58.142872  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142879  365167 command_runner.go:130] >       "size": "750414",
	I0804 00:26:58.142888  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142895  365167 command_runner.go:130] >         "value": "65535"
	I0804 00:26:58.142900  365167 command_runner.go:130] >       },
	I0804 00:26:58.142908  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142914  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142923  365167 command_runner.go:130] >       "pinned": true
	I0804 00:26:58.142928  365167 command_runner.go:130] >     }
	I0804 00:26:58.142935  365167 command_runner.go:130] >   ]
	I0804 00:26:58.142940  365167 command_runner.go:130] > }
	I0804 00:26:58.143152  365167 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:26:58.143168  365167 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:26:58.143177  365167 kubeadm.go:934] updating node { 192.168.39.23 8443 v1.30.3 crio true true} ...
	I0804 00:26:58.143327  365167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-453015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:26:58.143409  365167 ssh_runner.go:195] Run: crio config
	I0804 00:26:58.186081  365167 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0804 00:26:58.186118  365167 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0804 00:26:58.186128  365167 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0804 00:26:58.186133  365167 command_runner.go:130] > #
	I0804 00:26:58.186165  365167 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0804 00:26:58.186176  365167 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0804 00:26:58.186186  365167 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0804 00:26:58.186199  365167 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0804 00:26:58.186206  365167 command_runner.go:130] > # reload'.
	I0804 00:26:58.186215  365167 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0804 00:26:58.186230  365167 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0804 00:26:58.186239  365167 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0804 00:26:58.186249  365167 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0804 00:26:58.186255  365167 command_runner.go:130] > [crio]
	I0804 00:26:58.186264  365167 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0804 00:26:58.186275  365167 command_runner.go:130] > # containers images, in this directory.
	I0804 00:26:58.186283  365167 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0804 00:26:58.186300  365167 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0804 00:26:58.186308  365167 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0804 00:26:58.186321  365167 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0804 00:26:58.186644  365167 command_runner.go:130] > # imagestore = ""
	I0804 00:26:58.186669  365167 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0804 00:26:58.186679  365167 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0804 00:26:58.186789  365167 command_runner.go:130] > storage_driver = "overlay"
	I0804 00:26:58.186806  365167 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0804 00:26:58.186822  365167 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0804 00:26:58.186828  365167 command_runner.go:130] > storage_option = [
	I0804 00:26:58.187045  365167 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0804 00:26:58.187061  365167 command_runner.go:130] > ]
	I0804 00:26:58.187072  365167 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0804 00:26:58.187080  365167 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0804 00:26:58.187116  365167 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0804 00:26:58.187129  365167 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0804 00:26:58.187135  365167 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0804 00:26:58.187156  365167 command_runner.go:130] > # always happen on a node reboot
	I0804 00:26:58.187369  365167 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0804 00:26:58.187388  365167 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0804 00:26:58.187395  365167 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0804 00:26:58.187400  365167 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0804 00:26:58.187502  365167 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0804 00:26:58.187525  365167 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0804 00:26:58.187539  365167 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0804 00:26:58.187825  365167 command_runner.go:130] > # internal_wipe = true
	I0804 00:26:58.187844  365167 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0804 00:26:58.187853  365167 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0804 00:26:58.188056  365167 command_runner.go:130] > # internal_repair = false
	I0804 00:26:58.188067  365167 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0804 00:26:58.188074  365167 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0804 00:26:58.188079  365167 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0804 00:26:58.188336  365167 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0804 00:26:58.188352  365167 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0804 00:26:58.188359  365167 command_runner.go:130] > [crio.api]
	I0804 00:26:58.188367  365167 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0804 00:26:58.188596  365167 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0804 00:26:58.188612  365167 command_runner.go:130] > # IP address on which the stream server will listen.
	I0804 00:26:58.188943  365167 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0804 00:26:58.188972  365167 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0804 00:26:58.188981  365167 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0804 00:26:58.189240  365167 command_runner.go:130] > # stream_port = "0"
	I0804 00:26:58.189255  365167 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0804 00:26:58.189538  365167 command_runner.go:130] > # stream_enable_tls = false
	I0804 00:26:58.189555  365167 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0804 00:26:58.189843  365167 command_runner.go:130] > # stream_idle_timeout = ""
	I0804 00:26:58.189858  365167 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0804 00:26:58.189868  365167 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0804 00:26:58.189874  365167 command_runner.go:130] > # minutes.
	I0804 00:26:58.190088  365167 command_runner.go:130] > # stream_tls_cert = ""
	I0804 00:26:58.190104  365167 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0804 00:26:58.190114  365167 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0804 00:26:58.190288  365167 command_runner.go:130] > # stream_tls_key = ""
	I0804 00:26:58.190301  365167 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0804 00:26:58.190311  365167 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0804 00:26:58.190364  365167 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0804 00:26:58.190556  365167 command_runner.go:130] > # stream_tls_ca = ""
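	A sketch enabling the encrypted stream server with the keys described above; the certificate paths are placeholders, not values from this run:
	
	  [crio.api]
	  stream_enable_tls = true
	  stream_tls_cert = "/etc/crio/certs/stream.crt"
	  stream_tls_key = "/etc/crio/certs/stream.key"
	  stream_tls_ca = "/etc/crio/certs/stream-ca.crt"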
	I0804 00:26:58.190569  365167 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 00:26:58.190761  365167 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0804 00:26:58.190772  365167 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 00:26:58.190887  365167 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0804 00:26:58.190910  365167 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0804 00:26:58.190923  365167 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0804 00:26:58.190930  365167 command_runner.go:130] > [crio.runtime]
	I0804 00:26:58.190936  365167 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0804 00:26:58.190945  365167 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0804 00:26:58.190948  365167 command_runner.go:130] > # "nofile=1024:2048"
	I0804 00:26:58.190956  365167 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0804 00:26:58.191058  365167 command_runner.go:130] > # default_ulimits = [
	I0804 00:26:58.191211  365167 command_runner.go:130] > # ]
	I0804 00:26:58.191221  365167 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0804 00:26:58.191545  365167 command_runner.go:130] > # no_pivot = false
	I0804 00:26:58.191561  365167 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0804 00:26:58.191571  365167 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0804 00:26:58.191994  365167 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0804 00:26:58.192010  365167 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0804 00:26:58.192018  365167 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0804 00:26:58.192028  365167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 00:26:58.192095  365167 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0804 00:26:58.192115  365167 command_runner.go:130] > # Cgroup setting for conmon
	I0804 00:26:58.192126  365167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0804 00:26:58.192257  365167 command_runner.go:130] > conmon_cgroup = "pod"
	I0804 00:26:58.192272  365167 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0804 00:26:58.192280  365167 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0804 00:26:58.192290  365167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 00:26:58.192299  365167 command_runner.go:130] > conmon_env = [
	I0804 00:26:58.192362  365167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 00:26:58.192419  365167 command_runner.go:130] > ]
	I0804 00:26:58.192431  365167 command_runner.go:130] > # Additional environment variables to set for all the
	I0804 00:26:58.192441  365167 command_runner.go:130] > # containers. These are overridden if set in the
	I0804 00:26:58.192453  365167 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0804 00:26:58.192568  365167 command_runner.go:130] > # default_env = [
	I0804 00:26:58.192778  365167 command_runner.go:130] > # ]
	I0804 00:26:58.192796  365167 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0804 00:26:58.192807  365167 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0804 00:26:58.192839  365167 command_runner.go:130] > # selinux = false
	I0804 00:26:58.192855  365167 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0804 00:26:58.192868  365167 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0804 00:26:58.192881  365167 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0804 00:26:58.192906  365167 command_runner.go:130] > # seccomp_profile = ""
	I0804 00:26:58.192920  365167 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0804 00:26:58.192930  365167 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0804 00:26:58.192949  365167 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0804 00:26:58.192961  365167 command_runner.go:130] > # which might increase security.
	I0804 00:26:58.192972  365167 command_runner.go:130] > # This option is currently deprecated,
	I0804 00:26:58.192984  365167 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0804 00:26:58.192994  365167 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0804 00:26:58.193007  365167 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0804 00:26:58.193019  365167 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0804 00:26:58.193033  365167 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0804 00:26:58.193042  365167 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0804 00:26:58.193053  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.193065  365167 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0804 00:26:58.193077  365167 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0804 00:26:58.193087  365167 command_runner.go:130] > # the cgroup blockio controller.
	I0804 00:26:58.193094  365167 command_runner.go:130] > # blockio_config_file = ""
	I0804 00:26:58.193106  365167 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0804 00:26:58.193116  365167 command_runner.go:130] > # blockio parameters.
	I0804 00:26:58.193128  365167 command_runner.go:130] > # blockio_reload = false
	I0804 00:26:58.193137  365167 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0804 00:26:58.193146  365167 command_runner.go:130] > # irqbalance daemon.
	I0804 00:26:58.193155  365167 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0804 00:26:58.193167  365167 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0804 00:26:58.193181  365167 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0804 00:26:58.193194  365167 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0804 00:26:58.193206  365167 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0804 00:26:58.193220  365167 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0804 00:26:58.193232  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.193248  365167 command_runner.go:130] > # rdt_config_file = ""
	I0804 00:26:58.193260  365167 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0804 00:26:58.193268  365167 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0804 00:26:58.193313  365167 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0804 00:26:58.193324  365167 command_runner.go:130] > # separate_pull_cgroup = ""
	I0804 00:26:58.193335  365167 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0804 00:26:58.193347  365167 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0804 00:26:58.193355  365167 command_runner.go:130] > # will be added.
	I0804 00:26:58.193362  365167 command_runner.go:130] > # default_capabilities = [
	I0804 00:26:58.193370  365167 command_runner.go:130] > # 	"CHOWN",
	I0804 00:26:58.193377  365167 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0804 00:26:58.193386  365167 command_runner.go:130] > # 	"FSETID",
	I0804 00:26:58.193392  365167 command_runner.go:130] > # 	"FOWNER",
	I0804 00:26:58.193398  365167 command_runner.go:130] > # 	"SETGID",
	I0804 00:26:58.193407  365167 command_runner.go:130] > # 	"SETUID",
	I0804 00:26:58.193413  365167 command_runner.go:130] > # 	"SETPCAP",
	I0804 00:26:58.193422  365167 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0804 00:26:58.193429  365167 command_runner.go:130] > # 	"KILL",
	I0804 00:26:58.193437  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193449  365167 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0804 00:26:58.193462  365167 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0804 00:26:58.193473  365167 command_runner.go:130] > # add_inheritable_capabilities = false
	I0804 00:26:58.193484  365167 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0804 00:26:58.193498  365167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 00:26:58.193515  365167 command_runner.go:130] > default_sysctls = [
	I0804 00:26:58.193526  365167 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0804 00:26:58.193532  365167 command_runner.go:130] > ]
	I0804 00:26:58.193539  365167 command_runner.go:130] > # List of devices on the host that a
	I0804 00:26:58.193549  365167 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0804 00:26:58.193558  365167 command_runner.go:130] > # allowed_devices = [
	I0804 00:26:58.193565  365167 command_runner.go:130] > # 	"/dev/fuse",
	I0804 00:26:58.193572  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193580  365167 command_runner.go:130] > # List of additional devices, specified as
	I0804 00:26:58.193594  365167 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0804 00:26:58.193606  365167 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0804 00:26:58.193618  365167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 00:26:58.193635  365167 command_runner.go:130] > # additional_devices = [
	I0804 00:26:58.193643  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193649  365167 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0804 00:26:58.193656  365167 command_runner.go:130] > # cdi_spec_dirs = [
	I0804 00:26:58.193659  365167 command_runner.go:130] > # 	"/etc/cdi",
	I0804 00:26:58.193663  365167 command_runner.go:130] > # 	"/var/run/cdi",
	I0804 00:26:58.193666  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193673  365167 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0804 00:26:58.193681  365167 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0804 00:26:58.193685  365167 command_runner.go:130] > # Defaults to false.
	I0804 00:26:58.193694  365167 command_runner.go:130] > # device_ownership_from_security_context = false
	I0804 00:26:58.193703  365167 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0804 00:26:58.193710  365167 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0804 00:26:58.193720  365167 command_runner.go:130] > # hooks_dir = [
	I0804 00:26:58.193728  365167 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0804 00:26:58.193743  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193753  365167 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0804 00:26:58.193766  365167 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0804 00:26:58.193773  365167 command_runner.go:130] > # its default mounts from the following two files:
	I0804 00:26:58.193781  365167 command_runner.go:130] > #
	I0804 00:26:58.193791  365167 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0804 00:26:58.193804  365167 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0804 00:26:58.193815  365167 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0804 00:26:58.193823  365167 command_runner.go:130] > #
	I0804 00:26:58.193832  365167 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0804 00:26:58.193844  365167 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0804 00:26:58.193856  365167 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0804 00:26:58.193864  365167 command_runner.go:130] > #      only add mounts it finds in this file.
	I0804 00:26:58.193873  365167 command_runner.go:130] > #
	I0804 00:26:58.193879  365167 command_runner.go:130] > # default_mounts_file = ""
	I0804 00:26:58.193891  365167 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0804 00:26:58.193903  365167 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0804 00:26:58.193911  365167 command_runner.go:130] > pids_limit = 1024
	I0804 00:26:58.193920  365167 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0804 00:26:58.193933  365167 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0804 00:26:58.193954  365167 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0804 00:26:58.193978  365167 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0804 00:26:58.193987  365167 command_runner.go:130] > # log_size_max = -1
	I0804 00:26:58.193998  365167 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0804 00:26:58.194007  365167 command_runner.go:130] > # log_to_journald = false
	I0804 00:26:58.194017  365167 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0804 00:26:58.194029  365167 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0804 00:26:58.194041  365167 command_runner.go:130] > # Path to directory for container attach sockets.
	I0804 00:26:58.194052  365167 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0804 00:26:58.194063  365167 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0804 00:26:58.194073  365167 command_runner.go:130] > # bind_mount_prefix = ""
	I0804 00:26:58.194080  365167 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0804 00:26:58.194089  365167 command_runner.go:130] > # read_only = false
	I0804 00:26:58.194096  365167 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0804 00:26:58.194108  365167 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0804 00:26:58.194117  365167 command_runner.go:130] > # live configuration reload.
	I0804 00:26:58.194124  365167 command_runner.go:130] > # log_level = "info"
	I0804 00:26:58.194136  365167 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0804 00:26:58.194146  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.194154  365167 command_runner.go:130] > # log_filter = ""
	I0804 00:26:58.194175  365167 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0804 00:26:58.194188  365167 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0804 00:26:58.194197  365167 command_runner.go:130] > # separated by comma.
	I0804 00:26:58.194208  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194217  365167 command_runner.go:130] > # uid_mappings = ""
	I0804 00:26:58.194228  365167 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0804 00:26:58.194240  365167 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0804 00:26:58.194250  365167 command_runner.go:130] > # separated by comma.
	I0804 00:26:58.194261  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194270  365167 command_runner.go:130] > # gid_mappings = ""
	I0804 00:26:58.194279  365167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0804 00:26:58.194305  365167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 00:26:58.194317  365167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 00:26:58.194328  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194338  365167 command_runner.go:130] > # minimum_mappable_uid = -1
	I0804 00:26:58.194348  365167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0804 00:26:58.194360  365167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 00:26:58.194378  365167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 00:26:58.194393  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194403  365167 command_runner.go:130] > # minimum_mappable_gid = -1
	I0804 00:26:58.194412  365167 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0804 00:26:58.194424  365167 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0804 00:26:58.194436  365167 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0804 00:26:58.194444  365167 command_runner.go:130] > # ctr_stop_timeout = 30
	I0804 00:26:58.194455  365167 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0804 00:26:58.194466  365167 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0804 00:26:58.194475  365167 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0804 00:26:58.194486  365167 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0804 00:26:58.194495  365167 command_runner.go:130] > drop_infra_ctr = false
	I0804 00:26:58.194504  365167 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0804 00:26:58.194516  365167 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0804 00:26:58.194531  365167 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0804 00:26:58.194541  365167 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0804 00:26:58.194553  365167 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0804 00:26:58.194565  365167 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0804 00:26:58.194576  365167 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0804 00:26:58.194588  365167 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0804 00:26:58.194597  365167 command_runner.go:130] > # shared_cpuset = ""
	I0804 00:26:58.194606  365167 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0804 00:26:58.194614  365167 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0804 00:26:58.194621  365167 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0804 00:26:58.194640  365167 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0804 00:26:58.194650  365167 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0804 00:26:58.194659  365167 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0804 00:26:58.194672  365167 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0804 00:26:58.194681  365167 command_runner.go:130] > # enable_criu_support = false
	I0804 00:26:58.194689  365167 command_runner.go:130] > # Enable/disable the generation of the container,
	I0804 00:26:58.194698  365167 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0804 00:26:58.194702  365167 command_runner.go:130] > # enable_pod_events = false
	I0804 00:26:58.194712  365167 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0804 00:26:58.194736  365167 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0804 00:26:58.194747  365167 command_runner.go:130] > # default_runtime = "runc"
	I0804 00:26:58.194767  365167 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0804 00:26:58.194780  365167 command_runner.go:130] > # will cause container creation to fail (instead of the current behavior of creating the path as a directory).
	I0804 00:26:58.194796  365167 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0804 00:26:58.194809  365167 command_runner.go:130] > # creation as a file is not desired either.
	I0804 00:26:58.194822  365167 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0804 00:26:58.194834  365167 command_runner.go:130] > # the hostname is being managed dynamically.
	I0804 00:26:58.194840  365167 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0804 00:26:58.194846  365167 command_runner.go:130] > # ]
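	Following the /etc/hostname example from the comment above, an illustrative uncommented setting would be:
	
	  absent_mount_sources_to_reject = [
	    "/etc/hostname",
	  ]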
	I0804 00:26:58.194859  365167 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0804 00:26:58.194872  365167 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0804 00:26:58.194885  365167 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0804 00:26:58.194893  365167 command_runner.go:130] > # Each entry in the table should follow the format:
	I0804 00:26:58.194901  365167 command_runner.go:130] > #
	I0804 00:26:58.194910  365167 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0804 00:26:58.194921  365167 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0804 00:26:58.194992  365167 command_runner.go:130] > # runtime_type = "oci"
	I0804 00:26:58.195004  365167 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0804 00:26:58.195011  365167 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0804 00:26:58.195019  365167 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0804 00:26:58.195030  365167 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0804 00:26:58.195038  365167 command_runner.go:130] > # monitor_env = []
	I0804 00:26:58.195049  365167 command_runner.go:130] > # privileged_without_host_devices = false
	I0804 00:26:58.195058  365167 command_runner.go:130] > # allowed_annotations = []
	I0804 00:26:58.195067  365167 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0804 00:26:58.195076  365167 command_runner.go:130] > # Where:
	I0804 00:26:58.195083  365167 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0804 00:26:58.195095  365167 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0804 00:26:58.195105  365167 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0804 00:26:58.195118  365167 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0804 00:26:58.195127  365167 command_runner.go:130] > #   in $PATH.
	I0804 00:26:58.195136  365167 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0804 00:26:58.195147  365167 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0804 00:26:58.195159  365167 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0804 00:26:58.195163  365167 command_runner.go:130] > #   state.
	I0804 00:26:58.195170  365167 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0804 00:26:58.195182  365167 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0804 00:26:58.195202  365167 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0804 00:26:58.195220  365167 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0804 00:26:58.195233  365167 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0804 00:26:58.195246  365167 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0804 00:26:58.195257  365167 command_runner.go:130] > #   The currently recognized values are:
	I0804 00:26:58.195266  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0804 00:26:58.195280  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0804 00:26:58.195293  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0804 00:26:58.195305  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0804 00:26:58.195319  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0804 00:26:58.195332  365167 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0804 00:26:58.195345  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0804 00:26:58.195356  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0804 00:26:58.195365  365167 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0804 00:26:58.195377  365167 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0804 00:26:58.195387  365167 command_runner.go:130] > #   deprecated option "conmon".
	I0804 00:26:58.195398  365167 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0804 00:26:58.195410  365167 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0804 00:26:58.195423  365167 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0804 00:26:58.195433  365167 command_runner.go:130] > #   should be moved to the container's cgroup
	I0804 00:26:58.195446  365167 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0804 00:26:58.195455  365167 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0804 00:26:58.195464  365167 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0804 00:26:58.195475  365167 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0804 00:26:58.195484  365167 command_runner.go:130] > #
	I0804 00:26:58.195492  365167 command_runner.go:130] > # Using the seccomp notifier feature:
	I0804 00:26:58.195501  365167 command_runner.go:130] > #
	I0804 00:26:58.195510  365167 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0804 00:26:58.195522  365167 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0804 00:26:58.195530  365167 command_runner.go:130] > #
	I0804 00:26:58.195542  365167 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0804 00:26:58.195553  365167 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0804 00:26:58.195558  365167 command_runner.go:130] > #
	I0804 00:26:58.195569  365167 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0804 00:26:58.195578  365167 command_runner.go:130] > # feature.
	I0804 00:26:58.195583  365167 command_runner.go:130] > #
	I0804 00:26:58.195601  365167 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0804 00:26:58.195613  365167 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0804 00:26:58.195626  365167 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0804 00:26:58.195638  365167 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0804 00:26:58.195741  365167 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0804 00:26:58.195756  365167 command_runner.go:130] > #
	I0804 00:26:58.195769  365167 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0804 00:26:58.195845  365167 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0804 00:26:58.195865  365167 command_runner.go:130] > #
	I0804 00:26:58.195885  365167 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0804 00:26:58.195906  365167 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0804 00:26:58.195914  365167 command_runner.go:130] > #
	I0804 00:26:58.195927  365167 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0804 00:26:58.195939  365167 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0804 00:26:58.195948  365167 command_runner.go:130] > # limitation.
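	A sketch of a runtime handler that allows the seccomp notifier annotation described above; the handler name runc-notify is hypothetical:
	
	  [crio.runtime.runtimes.runc-notify]
	  runtime_path = "/usr/bin/runc"
	  runtime_type = "oci"
	  allowed_annotations = [
	    "io.kubernetes.cri-o.seccompNotifierAction",
	  ]
	
	A pod that selects this handler can then set the annotation io.kubernetes.cri-o.seccompNotifierAction=stop together with restartPolicy: Never, as explained above.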
	I0804 00:26:58.195959  365167 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0804 00:26:58.195969  365167 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0804 00:26:58.195977  365167 command_runner.go:130] > runtime_type = "oci"
	I0804 00:26:58.195987  365167 command_runner.go:130] > runtime_root = "/run/runc"
	I0804 00:26:58.195997  365167 command_runner.go:130] > runtime_config_path = ""
	I0804 00:26:58.196007  365167 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0804 00:26:58.196016  365167 command_runner.go:130] > monitor_cgroup = "pod"
	I0804 00:26:58.196026  365167 command_runner.go:130] > monitor_exec_cgroup = ""
	I0804 00:26:58.196034  365167 command_runner.go:130] > monitor_env = [
	I0804 00:26:58.196042  365167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 00:26:58.196048  365167 command_runner.go:130] > ]
	I0804 00:26:58.196056  365167 command_runner.go:130] > privileged_without_host_devices = false
	I0804 00:26:58.196069  365167 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0804 00:26:58.196081  365167 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0804 00:26:58.196094  365167 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0804 00:26:58.196109  365167 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0804 00:26:58.196125  365167 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0804 00:26:58.196135  365167 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0804 00:26:58.196149  365167 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0804 00:26:58.196165  365167 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0804 00:26:58.196178  365167 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0804 00:26:58.196200  365167 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0804 00:26:58.196209  365167 command_runner.go:130] > # Example:
	I0804 00:26:58.196217  365167 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0804 00:26:58.196224  365167 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0804 00:26:58.196230  365167 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0804 00:26:58.196235  365167 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0804 00:26:58.196241  365167 command_runner.go:130] > # cpuset = 0
	I0804 00:26:58.196247  365167 command_runner.go:130] > # cpushares = "0-1"
	I0804 00:26:58.196252  365167 command_runner.go:130] > # Where:
	I0804 00:26:58.196261  365167 command_runner.go:130] > # The workload name is workload-type.
	I0804 00:26:58.196272  365167 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0804 00:26:58.196280  365167 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0804 00:26:58.196289  365167 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0804 00:26:58.196301  365167 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0804 00:26:58.196310  365167 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0804 00:26:58.196315  365167 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0804 00:26:58.196321  365167 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0804 00:26:58.196328  365167 command_runner.go:130] > # Default value is set to true
	I0804 00:26:58.196335  365167 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0804 00:26:58.196344  365167 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0804 00:26:58.196351  365167 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0804 00:26:58.196358  365167 command_runner.go:130] > # Default value is set to 'false'
	I0804 00:26:58.196365  365167 command_runner.go:130] > # disable_hostport_mapping = false
	I0804 00:26:58.196374  365167 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0804 00:26:58.196378  365167 command_runner.go:130] > #
	I0804 00:26:58.196387  365167 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0804 00:26:58.196396  365167 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0804 00:26:58.196403  365167 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0804 00:26:58.196410  365167 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0804 00:26:58.196418  365167 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0804 00:26:58.196424  365167 command_runner.go:130] > [crio.image]
	I0804 00:26:58.196434  365167 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0804 00:26:58.196450  365167 command_runner.go:130] > # default_transport = "docker://"
	I0804 00:26:58.196462  365167 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0804 00:26:58.196474  365167 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0804 00:26:58.196483  365167 command_runner.go:130] > # global_auth_file = ""
	I0804 00:26:58.196504  365167 command_runner.go:130] > # The image used to instantiate infra containers.
	I0804 00:26:58.196516  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.196524  365167 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0804 00:26:58.196537  365167 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0804 00:26:58.196548  365167 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0804 00:26:58.196560  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.196570  365167 command_runner.go:130] > # pause_image_auth_file = ""
	I0804 00:26:58.196581  365167 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0804 00:26:58.196590  365167 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0804 00:26:58.196602  365167 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0804 00:26:58.196614  365167 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0804 00:26:58.196627  365167 command_runner.go:130] > # pause_command = "/pause"
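	Uncommenting the pause-related defaults shown above gives, for example:
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	  pause_command = "/pause"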
	I0804 00:26:58.196639  365167 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0804 00:26:58.196651  365167 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0804 00:26:58.196667  365167 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0804 00:26:58.196678  365167 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0804 00:26:58.196687  365167 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0804 00:26:58.196700  365167 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0804 00:26:58.196710  365167 command_runner.go:130] > # pinned_images = [
	I0804 00:26:58.196715  365167 command_runner.go:130] > # ]
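	An illustrative pinned_images list showing the exact, glob, and keyword pattern styles described above; the image names are placeholders:
	
	  pinned_images = [
	    "registry.k8s.io/pause:3.9",          # exact match
	    "registry.k8s.io/kube-apiserver*",    # glob: wildcard at the end
	    "*metrics-server*",                   # keyword: wildcards on both ends
	  ]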
	I0804 00:26:58.196728  365167 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0804 00:26:58.196740  365167 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0804 00:26:58.196753  365167 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0804 00:26:58.196765  365167 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0804 00:26:58.196776  365167 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0804 00:26:58.196782  365167 command_runner.go:130] > # signature_policy = ""
	I0804 00:26:58.196790  365167 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0804 00:26:58.196804  365167 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0804 00:26:58.196818  365167 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0804 00:26:58.196830  365167 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0804 00:26:58.196842  365167 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0804 00:26:58.196864  365167 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0804 00:26:58.196873  365167 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0804 00:26:58.196886  365167 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0804 00:26:58.196896  365167 command_runner.go:130] > # changing them here.
	I0804 00:26:58.196903  365167 command_runner.go:130] > # insecure_registries = [
	I0804 00:26:58.196919  365167 command_runner.go:130] > # ]
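	The comment above recommends configuring registries in /etc/containers/registries.conf instead; a minimal illustrative entry in that file, marking a private registry as insecure, looks roughly like:
	
	  [[registry]]
	  location = "192.168.39.1:5000"
	  insecure = true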
	I0804 00:26:58.196933  365167 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0804 00:26:58.196947  365167 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0804 00:26:58.196957  365167 command_runner.go:130] > # image_volumes = "mkdir"
	I0804 00:26:58.196967  365167 command_runner.go:130] > # Temporary directory to use for storing big files
	I0804 00:26:58.196975  365167 command_runner.go:130] > # big_files_temporary_dir = ""
	I0804 00:26:58.196985  365167 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0804 00:26:58.196994  365167 command_runner.go:130] > # CNI plugins.
	I0804 00:26:58.197000  365167 command_runner.go:130] > [crio.network]
	I0804 00:26:58.197013  365167 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0804 00:26:58.197025  365167 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0804 00:26:58.197034  365167 command_runner.go:130] > # cni_default_network = ""
	I0804 00:26:58.197043  365167 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0804 00:26:58.197052  365167 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0804 00:26:58.197064  365167 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0804 00:26:58.197073  365167 command_runner.go:130] > # plugin_dirs = [
	I0804 00:26:58.197083  365167 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0804 00:26:58.197088  365167 command_runner.go:130] > # ]
	I0804 00:26:58.197099  365167 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0804 00:26:58.197116  365167 command_runner.go:130] > [crio.metrics]
	I0804 00:26:58.197126  365167 command_runner.go:130] > # Globally enable or disable metrics support.
	I0804 00:26:58.197136  365167 command_runner.go:130] > enable_metrics = true
	I0804 00:26:58.197146  365167 command_runner.go:130] > # Specify enabled metrics collectors.
	I0804 00:26:58.197154  365167 command_runner.go:130] > # Per default all metrics are enabled.
	I0804 00:26:58.197164  365167 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0804 00:26:58.197177  365167 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0804 00:26:58.197189  365167 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0804 00:26:58.197199  365167 command_runner.go:130] > # metrics_collectors = [
	I0804 00:26:58.197208  365167 command_runner.go:130] > # 	"operations",
	I0804 00:26:58.197219  365167 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0804 00:26:58.197235  365167 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0804 00:26:58.197244  365167 command_runner.go:130] > # 	"operations_errors",
	I0804 00:26:58.197253  365167 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0804 00:26:58.197260  365167 command_runner.go:130] > # 	"image_pulls_by_name",
	I0804 00:26:58.197265  365167 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0804 00:26:58.197275  365167 command_runner.go:130] > # 	"image_pulls_failures",
	I0804 00:26:58.197297  365167 command_runner.go:130] > # 	"image_pulls_successes",
	I0804 00:26:58.197307  365167 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0804 00:26:58.197317  365167 command_runner.go:130] > # 	"image_layer_reuse",
	I0804 00:26:58.197327  365167 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0804 00:26:58.197336  365167 command_runner.go:130] > # 	"containers_oom_total",
	I0804 00:26:58.197345  365167 command_runner.go:130] > # 	"containers_oom",
	I0804 00:26:58.197353  365167 command_runner.go:130] > # 	"processes_defunct",
	I0804 00:26:58.197360  365167 command_runner.go:130] > # 	"operations_total",
	I0804 00:26:58.197365  365167 command_runner.go:130] > # 	"operations_latency_seconds",
	I0804 00:26:58.197374  365167 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0804 00:26:58.197384  365167 command_runner.go:130] > # 	"operations_errors_total",
	I0804 00:26:58.197394  365167 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0804 00:26:58.197405  365167 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0804 00:26:58.197415  365167 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0804 00:26:58.197424  365167 command_runner.go:130] > # 	"image_pulls_success_total",
	I0804 00:26:58.197433  365167 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0804 00:26:58.197443  365167 command_runner.go:130] > # 	"containers_oom_count_total",
	I0804 00:26:58.197452  365167 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0804 00:26:58.197461  365167 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0804 00:26:58.197469  365167 command_runner.go:130] > # ]
	I0804 00:26:58.197481  365167 command_runner.go:130] > # The port on which the metrics server will listen.
	I0804 00:26:58.197490  365167 command_runner.go:130] > # metrics_port = 9090
	I0804 00:26:58.197501  365167 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0804 00:26:58.197526  365167 command_runner.go:130] > # metrics_socket = ""
	I0804 00:26:58.197534  365167 command_runner.go:130] > # The certificate for the secure metrics server.
	I0804 00:26:58.197547  365167 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0804 00:26:58.197560  365167 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0804 00:26:58.197570  365167 command_runner.go:130] > # certificate on any modification event.
	I0804 00:26:58.197578  365167 command_runner.go:130] > # metrics_cert = ""
	I0804 00:26:58.197589  365167 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0804 00:26:58.197600  365167 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0804 00:26:58.197612  365167 command_runner.go:130] > # metrics_key = ""
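	A sketch that enables metrics on the default port with a subset of the collectors listed above; the collector selection is illustrative:
	
	  [crio.metrics]
	  enable_metrics = true
	  metrics_port = 9090
	  metrics_collectors = [
	    "operations",
	    "image_pulls_failure_total",
	  ]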
	I0804 00:26:58.197624  365167 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0804 00:26:58.197633  365167 command_runner.go:130] > [crio.tracing]
	I0804 00:26:58.197643  365167 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0804 00:26:58.197653  365167 command_runner.go:130] > # enable_tracing = false
	I0804 00:26:58.197675  365167 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0804 00:26:58.197685  365167 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0804 00:26:58.197699  365167 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0804 00:26:58.197706  365167 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0804 00:26:58.197712  365167 command_runner.go:130] > # CRI-O NRI configuration.
	I0804 00:26:58.197720  365167 command_runner.go:130] > [crio.nri]
	I0804 00:26:58.197730  365167 command_runner.go:130] > # Globally enable or disable NRI.
	I0804 00:26:58.197737  365167 command_runner.go:130] > # enable_nri = false
	I0804 00:26:58.197747  365167 command_runner.go:130] > # NRI socket to listen on.
	I0804 00:26:58.197757  365167 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0804 00:26:58.197766  365167 command_runner.go:130] > # NRI plugin directory to use.
	I0804 00:26:58.197776  365167 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0804 00:26:58.197787  365167 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0804 00:26:58.197796  365167 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0804 00:26:58.197805  365167 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0804 00:26:58.197814  365167 command_runner.go:130] > # nri_disable_connections = false
	I0804 00:26:58.197825  365167 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0804 00:26:58.197835  365167 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0804 00:26:58.197846  365167 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0804 00:26:58.197855  365167 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0804 00:26:58.197868  365167 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0804 00:26:58.197877  365167 command_runner.go:130] > [crio.stats]
	I0804 00:26:58.197888  365167 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0804 00:26:58.197896  365167 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0804 00:26:58.197907  365167 command_runner.go:130] > # stats_collection_period = 0
	I0804 00:26:58.197955  365167 command_runner.go:130] ! time="2024-08-04 00:26:58.157519577Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0804 00:26:58.197979  365167 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
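Note: the config dump above ends with CRI-O's [crio.metrics], [crio.tracing], [crio.nri] and [crio.stats] sections, all left at their commented-out defaults. As a rough illustration only (the test keeps metrics disabled), a Go sketch of how the Prometheus endpoint could be read if enable_metrics were set to true on the default metrics_port = 9090:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// Illustrative only: assumes CRI-O was started with enable_metrics = true
// and the default metrics_port = 9090 shown in the dump above. The test
// run itself leaves metrics disabled.
func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Fprintln(os.Stderr, "metrics endpoint unreachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(body))
}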
	I0804 00:26:58.198183  365167 cni.go:84] Creating CNI manager for ""
	I0804 00:26:58.198197  365167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 00:26:58.198208  365167 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:26:58.198239  365167 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.23 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-453015 NodeName:multinode-453015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:26:58.198399  365167 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-453015"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:26:58.198476  365167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:26:58.209296  365167 command_runner.go:130] > kubeadm
	I0804 00:26:58.209322  365167 command_runner.go:130] > kubectl
	I0804 00:26:58.209326  365167 command_runner.go:130] > kubelet
	I0804 00:26:58.209401  365167 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:26:58.209474  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:26:58.220800  365167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0804 00:26:58.239579  365167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:26:58.258214  365167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
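Note: the kubeadm config shown above is rendered in memory and copied to /var/tmp/minikube/kubeadm.yaml.new by the scp step just logged. A minimal sketch, not minikube's actual template, of how the InitConfiguration stanza could be generated from a node IP and name with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// Hypothetical illustration (not minikube's real template): render the
// InitConfiguration part of a kubeadm config in the same shape as the
// config dumped in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	_ = tmpl.Execute(os.Stdout, struct{ NodeIP, NodeName string }{
		NodeIP:   "192.168.39.23",
		NodeName: "multinode-453015",
	})
}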
	I0804 00:26:58.277337  365167 ssh_runner.go:195] Run: grep 192.168.39.23	control-plane.minikube.internal$ /etc/hosts
	I0804 00:26:58.281681  365167 command_runner.go:130] > 192.168.39.23	control-plane.minikube.internal
	I0804 00:26:58.281778  365167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:26:58.423149  365167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:26:58.438690  365167 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015 for IP: 192.168.39.23
	I0804 00:26:58.438724  365167 certs.go:194] generating shared ca certs ...
	I0804 00:26:58.438746  365167 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:26:58.438944  365167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0804 00:26:58.438986  365167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0804 00:26:58.438997  365167 certs.go:256] generating profile certs ...
	I0804 00:26:58.439074  365167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/client.key
	I0804 00:26:58.439132  365167 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.key.c0875c15
	I0804 00:26:58.439186  365167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.key
	I0804 00:26:58.439197  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 00:26:58.439212  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 00:26:58.439225  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 00:26:58.439237  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 00:26:58.439250  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 00:26:58.439262  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 00:26:58.439275  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 00:26:58.439287  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 00:26:58.439342  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0804 00:26:58.439371  365167 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0804 00:26:58.439380  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 00:26:58.439402  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0804 00:26:58.439424  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:26:58.439448  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0804 00:26:58.439483  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:26:58.439509  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.439536  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.439562  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.440183  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:26:58.465710  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:26:58.491165  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:26:58.515645  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 00:26:58.541229  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:26:58.566348  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:26:58.591492  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:26:58.618106  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:26:58.643577  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:26:58.668184  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0804 00:26:58.692856  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0804 00:26:58.716363  365167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:26:58.733961  365167 ssh_runner.go:195] Run: openssl version
	I0804 00:26:58.739951  365167 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0804 00:26:58.740202  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0804 00:26:58.752457  365167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.757728  365167 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.757785  365167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.757856  365167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.764273  365167 command_runner.go:130] > 3ec20f2e
	I0804 00:26:58.764379  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:26:58.775032  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:26:58.787381  365167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.792162  365167 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.792386  365167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.792454  365167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.798252  365167 command_runner.go:130] > b5213941
	I0804 00:26:58.798443  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:26:58.808561  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0804 00:26:58.820328  365167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.825219  365167 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.825256  365167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.825305  365167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.830984  365167 command_runner.go:130] > 51391683
	I0804 00:26:58.831057  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
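Note: the sequence above computes each CA certificate's OpenSSL subject hash and symlinks the file into /etc/ssl/certs as <hash>.0 so system TLS clients can resolve it. A hedged Go sketch of the same hash-then-symlink flow, assuming openssl is on PATH; the paths are taken from the log purely as an example:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// Hypothetical helper mirroring the flow in the log: compute the OpenSSL
// subject hash of a CA certificate and link it into certsDir as <hash>.0.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")

	// Equivalent of "ln -fs": replace any existing link.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}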
	I0804 00:26:58.840853  365167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:26:58.845478  365167 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:26:58.845523  365167 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 00:26:58.845533  365167 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0804 00:26:58.845542  365167 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 00:26:58.845551  365167 command_runner.go:130] > Access: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845558  365167 command_runner.go:130] > Modify: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845565  365167 command_runner.go:130] > Change: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845574  365167 command_runner.go:130] >  Birth: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845648  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:26:58.851561  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.851668  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:26:58.857461  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.857589  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:26:58.863107  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.863310  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:26:58.869319  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.869530  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:26:58.875571  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.875656  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:26:58.881512  365167 command_runner.go:130] > Certificate will not expire
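Note: each check above runs openssl x509 -checkend 86400, i.e. "does this certificate expire within the next 24 hours?". A rough Go equivalent using crypto/x509 instead of shelling out; the certificate path is just one example from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires (or has
// already expired) within the next duration d, mirroring "-checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}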
	I0804 00:26:58.881678  365167 kubeadm.go:392] StartCluster: {Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:26:58.881826  365167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:26:58.881886  365167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:26:58.926720  365167 command_runner.go:130] > 8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8
	I0804 00:26:58.926753  365167 command_runner.go:130] > 51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6
	I0804 00:26:58.926785  365167 command_runner.go:130] > eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b
	I0804 00:26:58.926881  365167 command_runner.go:130] > f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647
	I0804 00:26:58.926901  365167 command_runner.go:130] > 1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f
	I0804 00:26:58.926952  365167 command_runner.go:130] > 1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924
	I0804 00:26:58.927034  365167 command_runner.go:130] > 36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316
	I0804 00:26:58.927102  365167 command_runner.go:130] > d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4
	I0804 00:26:58.928731  365167 cri.go:89] found id: "8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8"
	I0804 00:26:58.928743  365167 cri.go:89] found id: "51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6"
	I0804 00:26:58.928747  365167 cri.go:89] found id: "eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b"
	I0804 00:26:58.928750  365167 cri.go:89] found id: "f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647"
	I0804 00:26:58.928752  365167 cri.go:89] found id: "1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f"
	I0804 00:26:58.928756  365167 cri.go:89] found id: "1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924"
	I0804 00:26:58.928758  365167 cri.go:89] found id: "36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316"
	I0804 00:26:58.928761  365167 cri.go:89] found id: "d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4"
	I0804 00:26:58.928763  365167 cri.go:89] found id: ""
	I0804 00:26:58.928813  365167 ssh_runner.go:195] Run: sudo runc list -f json
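Note: the container IDs listed above come from "crictl ps -a --quiet" filtered on the kube-system pod namespace; minikube then records each returned ID. A small illustrative Go wrapper around that same crictl invocation, assuming sudo and crictl are available on the node:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs lists the IDs of all containers (running or
// exited) whose pod is in the kube-system namespace, using the exact
// crictl flags seen in the log above.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println(id)
	}
}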
	
	
	==> CRI-O <==
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.272591677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731319272569266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b700eace-490e-4a68-97bf-bbc0a6c92e56 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.273460830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b70dee5-6ccd-49ca-8261-a6e241a582cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.273546752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b70dee5-6ccd-49ca-8261-a6e241a582cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.274071982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b70dee5-6ccd-49ca-8261-a6e241a582cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.316431648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f263b89-d52e-41b1-af07-86185a359388 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.316520747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f263b89-d52e-41b1-af07-86185a359388 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.317624527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e30bf54c-946b-48be-b7b3-a0c534d9cf33 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.318184334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731319318159158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e30bf54c-946b-48be-b7b3-a0c534d9cf33 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.318702621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48655ea7-8cb3-49d3-8e23-cda1d6ab08c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.318755031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48655ea7-8cb3-49d3-8e23-cda1d6ab08c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.319168486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48655ea7-8cb3-49d3-8e23-cda1d6ab08c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.364682633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9792a3ca-101f-4ed8-b17a-55f4c7a5e086 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.364754391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9792a3ca-101f-4ed8-b17a-55f4c7a5e086 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.366192575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd6ac3ea-f7a0-4c80-9fbb-b73bd3e12288 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.366713952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731319366689632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd6ac3ea-f7a0-4c80-9fbb-b73bd3e12288 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.367447233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31e07508-761f-4f7f-a976-d87ae365606e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.367502417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31e07508-761f-4f7f-a976-d87ae365606e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.367828950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31e07508-761f-4f7f-a976-d87ae365606e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.410350402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e22a784b-75e2-4646-b890-f5a129c1df60 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.410439912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e22a784b-75e2-4646-b890-f5a129c1df60 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.416385829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a5559a9-bc21-45e9-9005-8cd4d0db81dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.416823651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731319416800727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a5559a9-bc21-45e9-9005-8cd4d0db81dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.419131043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91c391a2-4427-42bf-948b-4ef1efdcf1e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.419211517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91c391a2-4427-42bf-948b-4ef1efdcf1e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:28:39 multinode-453015 crio[2932]: time="2024-08-04 00:28:39.419577306Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91c391a2-4427-42bf-948b-4ef1efdcf1e2 name=/runtime.v1.RuntimeService/ListContainers
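
	The repeated Version, ImageFsInfo and ListContainers requests in the CRI-O journal above are the kubelet's periodic polling of the runtime over the CRI socket; they are level=debug request/response traces, not failures. A rough manual equivalent, assuming the multinode-453015 VM is still up and crictl is present in the guest (a sketch, not part of the test run), would be:

	$ minikube ssh -p multinode-453015 "sudo crictl version"
	$ minikube ssh -p multinode-453015 "sudo crictl imagefsinfo"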
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ad93dd5bfe2bc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   070ea9b2f8e0d       busybox-fc5497c4f-qcrhw
	7bf76841f45db       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   f4558b5067a4d       kindnet-d625q
	653213f7e1cf1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   f47797dc911e4       coredns-7db6d8ff4d-lpfg4
	2a1985964e07f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   a7384d1ee21ae       kube-proxy-btrgw
	806d9ae2d62a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   7dc8455275a38       storage-provisioner
	4d13c5d382f86       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   b96875d41283a       kube-scheduler-multinode-453015
	8190bba136cd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   2f022f70fbb31       etcd-multinode-453015
	3e4e62d81102f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   8306bbe24a830       kube-controller-manager-multinode-453015
	a9d532c5d501a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   3630e4bedd9cd       kube-apiserver-multinode-453015
	87db5415d7d25       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   b891fc765f039       busybox-fc5497c4f-qcrhw
	8fe03d194cc67       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   6d01779211309       coredns-7db6d8ff4d-lpfg4
	51ba132c389aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   c27297ab98f11       storage-provisioner
	eda8d348bfe19       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   db0708b20133d       kindnet-d625q
	f07ab5f5f0ce9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   ac74041b4c4c9       kube-proxy-btrgw
	1a43870f80eb8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   256f10b9683ff       kube-controller-manager-multinode-453015
	1b93a7722a9db       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   8806a9012b8be       kube-apiserver-multinode-453015
	36489d3306cf4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   be21249f1c8f1       kube-scheduler-multinode-453015
	d9ce68ffecfd6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   9675da07c9770       etcd-multinode-453015
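
	The container status table above shows one Running (attempt 1) and one Exited (attempt 0) instance of each control-plane and addon container, consistent with the node having been restarted. A listing like this can typically be reproduced directly against the node, again assuming the profile is still running and crictl is available in the guest:

	# list all containers known to CRI-O, including exited ones (sketch)
	$ minikube ssh -p multinode-453015 "sudo crictl ps -a"
	# and the pod sandboxes they belong to
	$ minikube ssh -p multinode-453015 "sudo crictl pods"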
	
	
	==> coredns [653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38783 - 59773 "HINFO IN 6596251184696092188.2619414782798662992. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011365219s
	
	
	==> coredns [8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8] <==
	[INFO] 10.244.0.3:39132 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001875015s
	[INFO] 10.244.0.3:35645 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171033s
	[INFO] 10.244.0.3:40695 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067736s
	[INFO] 10.244.0.3:51823 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001090872s
	[INFO] 10.244.0.3:53728 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064276s
	[INFO] 10.244.0.3:59214 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059662s
	[INFO] 10.244.0.3:53129 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057601s
	[INFO] 10.244.1.2:36858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152867s
	[INFO] 10.244.1.2:40557 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116851s
	[INFO] 10.244.1.2:33555 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109306s
	[INFO] 10.244.1.2:52895 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101585s
	[INFO] 10.244.0.3:52189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119404s
	[INFO] 10.244.0.3:58821 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072508s
	[INFO] 10.244.0.3:36432 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075175s
	[INFO] 10.244.0.3:57532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059597s
	[INFO] 10.244.1.2:57872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145424s
	[INFO] 10.244.1.2:59714 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187549s
	[INFO] 10.244.1.2:43975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134909s
	[INFO] 10.244.1.2:37267 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102985s
	[INFO] 10.244.0.3:35906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071029s
	[INFO] 10.244.0.3:60307 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000044812s
	[INFO] 10.244.0.3:57522 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000036503s
	[INFO] 10.244.0.3:58416 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000028081s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-453015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-453015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=multinode-453015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_20_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:20:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-453015
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:28:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    multinode-453015
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74f961df4acb48229bf5c18464bc6732
	  System UUID:                74f961df-4acb-4822-9bf5-c18464bc6732
	  Boot ID:                    1d91e3d4-a1b5-4f22-a4a2-ffec1ee4cea0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qcrhw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 coredns-7db6d8ff4d-lpfg4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m2s
	  kube-system                 etcd-multinode-453015                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
	  kube-system                 kindnet-d625q                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m3s
	  kube-system                 kube-apiserver-multinode-453015             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-multinode-453015    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-btrgw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-multinode-453015             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m1s                   kube-proxy       
	  Normal  Starting                 93s                    kube-proxy       
	  Normal  Starting                 8m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node multinode-453015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node multinode-453015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s (x7 over 8m22s)  kubelet          Node multinode-453015 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet          Node multinode-453015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet          Node multinode-453015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m15s                  kubelet          Node multinode-453015 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m3s                   node-controller  Node multinode-453015 event: Registered Node multinode-453015 in Controller
	  Normal  NodeReady                7m49s                  kubelet          Node multinode-453015 status is now: NodeReady
	  Normal  Starting                 99s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)      kubelet          Node multinode-453015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)      kubelet          Node multinode-453015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x7 over 99s)      kubelet          Node multinode-453015 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           82s                    node-controller  Node multinode-453015 event: Registered Node multinode-453015 in Controller
	
	
	Name:               multinode-453015-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-453015-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=multinode-453015
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T00_27_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:27:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-453015-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:28:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:27:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    multinode-453015-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6b4a354da8f4026b4c64afd1e29c6b4
	  System UUID:                c6b4a354-da8f-4026-b4c6-4afd1e29c6b4
	  Boot ID:                    88bcfe57-a24e-499a-8124-bdd0de124495
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9vxzv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kindnet-vlcff              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m17s
	  kube-system                 kube-proxy-ppqhx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m12s                  kube-proxy  
	  Normal  Starting                 53s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m17s (x2 over 7m17s)  kubelet     Node multinode-453015-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s (x2 over 7m17s)  kubelet     Node multinode-453015-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s (x2 over 7m17s)  kubelet     Node multinode-453015-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m17s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m59s                  kubelet     Node multinode-453015-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  58s (x2 over 58s)      kubelet     Node multinode-453015-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x2 over 58s)      kubelet     Node multinode-453015-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x2 over 58s)      kubelet     Node multinode-453015-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-453015-m02 status is now: NodeReady
	
	
	Name:               multinode-453015-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-453015-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=multinode-453015
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T00_28_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:28:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-453015-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:28:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:28:36 +0000   Sun, 04 Aug 2024 00:28:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:28:36 +0000   Sun, 04 Aug 2024 00:28:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:28:36 +0000   Sun, 04 Aug 2024 00:28:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:28:36 +0000   Sun, 04 Aug 2024 00:28:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    multinode-453015-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fba6420631e0496aa0e38990457ebd67
	  System UUID:                fba64206-31e0-496a-a0e3-8990457ebd67
	  Boot ID:                    32b9ea60-fa9a-4613-9fcb-38f55c0f9d33
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-sg5st       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-proxy-j96j8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m21s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m33s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m26s (x2 over 6m26s)  kubelet     Node multinode-453015-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x2 over 6m26s)  kubelet     Node multinode-453015-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x2 over 6m26s)  kubelet     Node multinode-453015-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m7s                   kubelet     Node multinode-453015-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m37s (x2 over 5m37s)  kubelet     Node multinode-453015-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x2 over 5m37s)  kubelet     Node multinode-453015-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m37s (x2 over 5m37s)  kubelet     Node multinode-453015-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m37s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m20s                  kubelet     Node multinode-453015-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet     Node multinode-453015-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet     Node multinode-453015-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet     Node multinode-453015-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-453015-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.074250] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.183240] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.151024] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.284642] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.328926] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.062936] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.563772] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.433412] kauditd_printk_skb: 52 callbacks suppressed
	[  +6.104143] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.103279] kauditd_printk_skb: 35 callbacks suppressed
	[ +13.194416] systemd-fstab-generator[1465]: Ignoring "noauto" option for root device
	[  +0.129242] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.261052] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 4 00:21] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 4 00:26] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.164419] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.185134] systemd-fstab-generator[2812]: Ignoring "noauto" option for root device
	[  +0.149312] systemd-fstab-generator[2824]: Ignoring "noauto" option for root device
	[  +0.397616] systemd-fstab-generator[2917]: Ignoring "noauto" option for root device
	[  +0.776340] systemd-fstab-generator[3030]: Ignoring "noauto" option for root device
	[  +2.223066] systemd-fstab-generator[3156]: Ignoring "noauto" option for root device
	[Aug 4 00:27] kauditd_printk_skb: 189 callbacks suppressed
	[ +11.846102] systemd-fstab-generator[3973]: Ignoring "noauto" option for root device
	[  +0.108228] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.856768] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a] <==
	{"level":"info","ts":"2024-08-04T00:27:01.922118Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d4cc2b8d7236707","local-member-id":"c6baa4636f442c95","added-peer-id":"c6baa4636f442c95","added-peer-peer-urls":["https://192.168.39.23:2380"]}
	{"level":"info","ts":"2024-08-04T00:27:01.922278Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d4cc2b8d7236707","local-member-id":"c6baa4636f442c95","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:27:01.922319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:27:01.923323Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:27:01.923525Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c6baa4636f442c95","initial-advertise-peer-urls":["https://192.168.39.23:2380"],"listen-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:27:01.923544Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:27:01.923607Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:27:01.923613Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:27:03.073124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T00:27:03.073227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:27:03.073295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 received MsgPreVoteResp from c6baa4636f442c95 at term 2"}
	{"level":"info","ts":"2024-08-04T00:27:03.07333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.073359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 received MsgVoteResp from c6baa4636f442c95 at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.073386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.073415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c6baa4636f442c95 elected leader c6baa4636f442c95 at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.078841Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c6baa4636f442c95","local-member-attributes":"{Name:multinode-453015 ClientURLs:[https://192.168.39.23:2379]}","request-path":"/0/members/c6baa4636f442c95/attributes","cluster-id":"7d4cc2b8d7236707","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:27:03.079068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:27:03.07917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:27:03.08111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:27:03.08306Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:27:03.083095Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:27:03.084612Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.23:2379"}
	{"level":"warn","ts":"2024-08-04T00:28:22.913759Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.486925ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:28:22.913972Z","caller":"traceutil/trace.go:171","msg":"trace[1032190154] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1162; }","duration":"110.779546ms","start":"2024-08-04T00:28:22.803175Z","end":"2024-08-04T00:28:22.913954Z","steps":["trace[1032190154] 'range keys from in-memory index tree'  (duration: 110.472017ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:28:22.913827Z","caller":"traceutil/trace.go:171","msg":"trace[1055889212] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"218.817263ms","start":"2024-08-04T00:28:22.694933Z","end":"2024-08-04T00:28:22.913751Z","steps":["trace[1055889212] 'process raft request'  (duration: 213.60492ms)"],"step_count":1}
	
	
	==> etcd [d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4] <==
	{"level":"info","ts":"2024-08-04T00:20:18.999545Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:20:19.008098Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d4cc2b8d7236707","local-member-id":"c6baa4636f442c95","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:19.008199Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:19.008222Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:19.008275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:19.008299Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:19.017759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:20:19.044397Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.23:2379"}
	{"level":"info","ts":"2024-08-04T00:21:22.804216Z","caller":"traceutil/trace.go:171","msg":"trace[324967981] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"116.105416ms","start":"2024-08-04T00:21:22.688098Z","end":"2024-08-04T00:21:22.804204Z","steps":["trace[324967981] 'process raft request'  (duration: 115.830866ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:21:22.80417Z","caller":"traceutil/trace.go:171","msg":"trace[782838819] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"154.937041ms","start":"2024-08-04T00:21:22.649198Z","end":"2024-08-04T00:21:22.804135Z","steps":["trace[782838819] 'process raft request'  (duration: 120.45226ms)","trace[782838819] 'compare'  (duration: 34.102652ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T00:22:13.95847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.666068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-453015-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:22:13.95932Z","caller":"traceutil/trace.go:171","msg":"trace[158503331] range","detail":"{range_begin:/registry/minions/multinode-453015-m03; range_end:; response_count:0; response_revision:620; }","duration":"179.548116ms","start":"2024-08-04T00:22:13.779695Z","end":"2024-08-04T00:22:13.959244Z","steps":["trace[158503331] 'range keys from in-memory index tree'  (duration: 178.603739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:22:13.958471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.636032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-453015-m03.17e85ea820fffed1\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2024-08-04T00:22:13.962835Z","caller":"traceutil/trace.go:171","msg":"trace[1635718525] range","detail":"{range_begin:/registry/events/default/multinode-453015-m03.17e85ea820fffed1; range_end:; response_count:1; response_revision:620; }","duration":"174.071127ms","start":"2024-08-04T00:22:13.788742Z","end":"2024-08-04T00:22:13.962814Z","steps":["trace[1635718525] 'range keys from in-memory index tree'  (duration: 169.530423ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:22:13.958778Z","caller":"traceutil/trace.go:171","msg":"trace[633275549] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"169.908515ms","start":"2024-08-04T00:22:13.788835Z","end":"2024-08-04T00:22:13.958743Z","steps":["trace[633275549] 'process raft request'  (duration: 169.393366ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:25:25.402776Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T00:25:25.402969Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-453015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"]}
	{"level":"warn","ts":"2024-08-04T00:25:25.403164Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:25:25.403327Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:25:25.500901Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:25:25.500978Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T00:25:25.501121Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c6baa4636f442c95","current-leader-member-id":"c6baa4636f442c95"}
	{"level":"info","ts":"2024-08-04T00:25:25.504132Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:25:25.504574Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:25:25.504666Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-453015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"]}
	
	
	==> kernel <==
	 00:28:39 up 8 min,  0 users,  load average: 0.44, 0.22, 0.12
	Linux multinode-453015 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d] <==
	I0804 00:27:56.518339       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:28:06.516112       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:28:06.516302       1 main.go:299] handling current node
	I0804 00:28:06.516392       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:28:06.516434       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:28:06.516669       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:28:06.516771       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:28:16.518369       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:28:16.518484       1 main.go:299] handling current node
	I0804 00:28:16.518499       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:28:16.518505       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:28:16.518653       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:28:16.518680       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:28:26.518068       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:28:26.518207       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.2.0/24] 
	I0804 00:28:26.518398       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:28:26.518440       1 main.go:299] handling current node
	I0804 00:28:26.518468       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:28:26.518485       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:28:36.515380       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:28:36.515585       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:28:36.515787       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:28:36.515841       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.2.0/24] 
	I0804 00:28:36.515955       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:28:36.515987       1 main.go:299] handling current node
	
	
	==> kindnet [eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b] <==
	I0804 00:24:40.602263       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:24:50.603811       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:24:50.603923       1 main.go:299] handling current node
	I0804 00:24:50.603954       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:24:50.603973       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:24:50.604195       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:24:50.604283       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:00.601776       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:25:00.601806       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:25:00.601952       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:25:00.601957       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:00.602077       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:25:00.602086       1 main.go:299] handling current node
	I0804 00:25:10.611142       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:25:10.611350       1 main.go:299] handling current node
	I0804 00:25:10.611392       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:25:10.611413       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:25:10.611563       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:25:10.611584       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:20.607148       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:25:20.607288       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:20.607456       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:25:20.607480       1 main.go:299] handling current node
	I0804 00:25:20.607502       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:25:20.607517       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924] <==
	E0804 00:21:45.871122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51768: use of closed network connection
	E0804 00:21:46.073379       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51784: use of closed network connection
	E0804 00:21:46.250334       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51802: use of closed network connection
	E0804 00:21:46.457977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51806: use of closed network connection
	E0804 00:21:46.629004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51818: use of closed network connection
	E0804 00:21:46.796802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51842: use of closed network connection
	E0804 00:21:47.087743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51878: use of closed network connection
	E0804 00:21:47.255307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51894: use of closed network connection
	E0804 00:21:47.427272       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51914: use of closed network connection
	E0804 00:21:47.590829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51946: use of closed network connection
	I0804 00:25:25.405097       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0804 00:25:25.424205       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424308       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424349       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424415       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424457       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434737       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434852       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434924       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434979       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.435118       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.435179       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.435225       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.436462       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0804 00:25:25.436933       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3] <==
	I0804 00:27:04.603653       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:27:04.605473       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:27:04.605541       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:27:04.618127       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:27:04.619751       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:27:04.620449       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:27:04.620517       1 policy_source.go:224] refreshing policies
	I0804 00:27:04.632604       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:27:04.644705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0804 00:27:04.646411       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:27:04.651680       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:27:04.651919       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:27:04.651984       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:27:04.652061       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:27:04.652115       1 cache.go:39] Caches are synced for autoregister controller
	E0804 00:27:04.661992       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 00:27:04.709958       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:27:05.510812       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 00:27:06.772235       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:27:06.903337       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:27:06.915614       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:27:06.992534       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:27:07.005585       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 00:27:17.622314       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:27:17.822814       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f] <==
	I0804 00:20:52.234607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="208.645µs"
	I0804 00:21:22.807068       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m02\" does not exist"
	I0804 00:21:22.859637       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m02" podCIDRs=["10.244.1.0/24"]
	I0804 00:21:26.553400       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-453015-m02"
	I0804 00:21:40.961288       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:21:43.212415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.685389ms"
	I0804 00:21:43.237304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.759597ms"
	I0804 00:21:43.239345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.971µs"
	I0804 00:21:44.739157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.29979ms"
	I0804 00:21:44.739239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.881µs"
	I0804 00:21:45.414642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.114615ms"
	I0804 00:21:45.415340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.33µs"
	I0804 00:22:13.960226       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m03\" does not exist"
	I0804 00:22:13.961844       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:22:13.972404       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m03" podCIDRs=["10.244.2.0/24"]
	I0804 00:22:16.572154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-453015-m03"
	I0804 00:22:32.574007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:01.459774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:02.670768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m03\" does not exist"
	I0804 00:23:02.670829       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:02.696694       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m03" podCIDRs=["10.244.3.0/24"]
	I0804 00:23:19.772486       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:56.628991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m03"
	I0804 00:23:56.686486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.99642ms"
	I0804 00:23:56.686584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.145µs"
	
	
	==> kube-controller-manager [3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c] <==
	I0804 00:27:18.191330       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:27:18.208684       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:27:37.300146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.904226ms"
	I0804 00:27:37.312377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.167608ms"
	I0804 00:27:37.324071       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.648232ms"
	I0804 00:27:37.324167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.796µs"
	I0804 00:27:41.583866       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m02\" does not exist"
	I0804 00:27:41.606796       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m02" podCIDRs=["10.244.1.0/24"]
	I0804 00:27:43.476596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.172µs"
	I0804 00:27:43.521449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.013µs"
	I0804 00:27:43.530776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.416µs"
	I0804 00:27:43.556960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.123µs"
	I0804 00:27:43.566118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.697µs"
	I0804 00:27:43.571422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.557µs"
	I0804 00:27:47.635301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.907µs"
	I0804 00:27:59.334216       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:27:59.356772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.334µs"
	I0804 00:27:59.374382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.248µs"
	I0804 00:28:01.111762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.929509ms"
	I0804 00:28:01.111847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.198µs"
	I0804 00:28:17.613895       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:28:18.624343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:28:18.625007       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m03\" does not exist"
	I0804 00:28:18.644761       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m03" podCIDRs=["10.244.2.0/24"]
	I0804 00:28:36.376717       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m03"
	
	
	==> kube-proxy [2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c] <==
	I0804 00:27:05.686052       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:27:05.698687       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.23"]
	I0804 00:27:05.790978       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:27:05.791076       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:27:05.791095       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:27:05.795823       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:27:05.796121       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:27:05.796153       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:27:05.797761       1 config.go:192] "Starting service config controller"
	I0804 00:27:05.797834       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:27:05.797881       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:27:05.797885       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:27:05.798528       1 config.go:319] "Starting node config controller"
	I0804 00:27:05.798561       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:27:05.898507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:27:05.898590       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:27:05.898599       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647] <==
	I0804 00:20:38.357556       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:20:38.372719       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.23"]
	I0804 00:20:38.429531       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:20:38.429631       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:20:38.429650       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:20:38.433363       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:20:38.433927       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:20:38.433946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:20:38.435792       1 config.go:192] "Starting service config controller"
	I0804 00:20:38.436167       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:20:38.436237       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:20:38.436257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:20:38.438489       1 config.go:319] "Starting node config controller"
	I0804 00:20:38.439219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:20:38.536756       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:20:38.536834       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:20:38.539306       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316] <==
	E0804 00:20:22.033738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 00:20:22.053531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 00:20:22.053586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0804 00:20:22.118499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 00:20:22.118641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0804 00:20:22.125971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:20:22.126000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 00:20:22.185310       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:22.185338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 00:20:22.198474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 00:20:22.198564       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 00:20:22.211709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:22.211830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0804 00:20:22.239749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 00:20:22.239890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0804 00:20:22.312496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 00:20:22.312608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0804 00:20:22.318881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 00:20:22.319088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0804 00:20:22.332339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 00:20:22.332479       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 00:20:22.495197       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 00:20:22.495290       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0804 00:20:25.119585       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 00:25:25.408253       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf] <==
	I0804 00:27:03.066333       1 serving.go:380] Generated self-signed cert in-memory
	I0804 00:27:04.655527       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:27:04.655608       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:27:04.670069       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0804 00:27:04.672133       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0804 00:27:04.672276       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:27:04.672356       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:27:04.672390       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0804 00:27:04.672450       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0804 00:27:04.673599       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:27:04.673536       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:27:04.773913       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0804 00:27:04.775359       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:27:04.775970       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:27:01 multinode-453015 kubelet[3163]: W0804 00:27:01.671662    3163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.23:8443: connect: connection refused
	Aug 04 00:27:01 multinode-453015 kubelet[3163]: E0804 00:27:01.671727    3163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.23:8443: connect: connection refused
	Aug 04 00:27:02 multinode-453015 kubelet[3163]: I0804 00:27:02.308348    3163 kubelet_node_status.go:73] "Attempting to register node" node="multinode-453015"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.676524    3163 kubelet_node_status.go:112] "Node was previously registered" node="multinode-453015"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.676643    3163 kubelet_node_status.go:76] "Successfully registered node" node="multinode-453015"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.678774    3163 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.679975    3163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.768479    3163 apiserver.go:52] "Watching apiserver"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.771947    3163 topology_manager.go:215] "Topology Admit Handler" podUID="5a373aab-548c-491b-9ff3-7d33fc97e7e5" podNamespace="kube-system" podName="kube-proxy-btrgw"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.772185    3163 topology_manager.go:215] "Topology Admit Handler" podUID="6b281006-ce73-4b6a-9592-1df16b7ae140" podNamespace="kube-system" podName="kindnet-d625q"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.772461    3163 topology_manager.go:215] "Topology Admit Handler" podUID="89831564-c2d0-4b22-8d93-dfd59ee56c9d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lpfg4"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.772620    3163 topology_manager.go:215] "Topology Admit Handler" podUID="8670b908-2c0e-4996-a2f9-32a57683749e" podNamespace="kube-system" podName="storage-provisioner"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.772698    3163 topology_manager.go:215] "Topology Admit Handler" podUID="2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0" podNamespace="default" podName="busybox-fc5497c4f-qcrhw"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.787413    3163 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810086    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a373aab-548c-491b-9ff3-7d33fc97e7e5-lib-modules\") pod \"kube-proxy-btrgw\" (UID: \"5a373aab-548c-491b-9ff3-7d33fc97e7e5\") " pod="kube-system/kube-proxy-btrgw"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810343    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b281006-ce73-4b6a-9592-1df16b7ae140-xtables-lock\") pod \"kindnet-d625q\" (UID: \"6b281006-ce73-4b6a-9592-1df16b7ae140\") " pod="kube-system/kindnet-d625q"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810512    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6b281006-ce73-4b6a-9592-1df16b7ae140-cni-cfg\") pod \"kindnet-d625q\" (UID: \"6b281006-ce73-4b6a-9592-1df16b7ae140\") " pod="kube-system/kindnet-d625q"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810599    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b281006-ce73-4b6a-9592-1df16b7ae140-lib-modules\") pod \"kindnet-d625q\" (UID: \"6b281006-ce73-4b6a-9592-1df16b7ae140\") " pod="kube-system/kindnet-d625q"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810699    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8670b908-2c0e-4996-a2f9-32a57683749e-tmp\") pod \"storage-provisioner\" (UID: \"8670b908-2c0e-4996-a2f9-32a57683749e\") " pod="kube-system/storage-provisioner"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810916    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a373aab-548c-491b-9ff3-7d33fc97e7e5-xtables-lock\") pod \"kube-proxy-btrgw\" (UID: \"5a373aab-548c-491b-9ff3-7d33fc97e7e5\") " pod="kube-system/kube-proxy-btrgw"
	Aug 04 00:28:00 multinode-453015 kubelet[3163]: E0804 00:28:00.849334    3163 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:28:38.967884  366218 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-323890/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-453015 -n multinode-453015
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-453015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (318.32s)
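Note on the "bufio.Scanner: token too long" error in the stderr block above: that is Go's bufio.ErrTooLong, which a bufio.Scanner returns when a single line exceeds its maximum token size (64 KiB by default), so the logs command could not re-read lastStart.txt. The following is a minimal, hypothetical Go sketch (not minikube's code; the file name is illustrative) of reading such a file with a larger scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Illustrative path only; the report's lastStart.txt lives under the CI workspace.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the limit from the 64 KiB default to 1 MiB so oversized log lines
		// do not fail with bufio.ErrTooLong ("token too long").
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}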

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453015 stop: exit status 82 (2m0.472877107s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-453015-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-453015 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453015 status: exit status 3 (18.825188373s)

                                                
                                                
-- stdout --
	multinode-453015
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-453015-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:31:02.413873  366874 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	E0804 00:31:02.413926  366874 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-453015 status" : exit status 3
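The status failure above reduces to the worker VM being unreachable over SSH: the stderr shows "dial tcp 192.168.39.217:22: connect: no route to host" after the stop command had already timed out with GUEST_STOP_TIMEOUT. As a rough way to reproduce that reachability check by hand (an assumption-level sketch, not minikube's implementation; the address is taken from the stderr above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Node SSH endpoint from the stderr above; adjust for another cluster.
		addr := "192.168.39.217:22"

		// A short TCP dial distinguishes a reachable SSH port from the
		// "no route to host" condition reported by the status command.
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("node unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("node reachable on", addr)
	}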
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-453015 -n multinode-453015
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-453015 logs -n 25: (1.501376281s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015:/home/docker/cp-test_multinode-453015-m02_multinode-453015.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015 sudo cat                                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m02_multinode-453015.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03:/home/docker/cp-test_multinode-453015-m02_multinode-453015-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015-m03 sudo cat                                   | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m02_multinode-453015-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp testdata/cp-test.txt                                                | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2291356066/001/cp-test_multinode-453015-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015:/home/docker/cp-test_multinode-453015-m03_multinode-453015.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015 sudo cat                                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m03_multinode-453015.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt                       | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m02:/home/docker/cp-test_multinode-453015-m03_multinode-453015-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n                                                                 | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | multinode-453015-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-453015 ssh -n multinode-453015-m02 sudo cat                                   | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	|         | /home/docker/cp-test_multinode-453015-m03_multinode-453015-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-453015 node stop m03                                                          | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:22 UTC |
	| node    | multinode-453015 node start                                                             | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:22 UTC | 04 Aug 24 00:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-453015                                                                | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:23 UTC |                     |
	| stop    | -p multinode-453015                                                                     | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:23 UTC |                     |
	| start   | -p multinode-453015                                                                     | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:25 UTC | 04 Aug 24 00:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-453015                                                                | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:28 UTC |                     |
	| node    | multinode-453015 node delete                                                            | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:28 UTC | 04 Aug 24 00:28 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-453015 stop                                                                   | multinode-453015 | jenkins | v1.33.1 | 04 Aug 24 00:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:25:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:25:24.496751  365167 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:25:24.497014  365167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:25:24.497024  365167 out.go:304] Setting ErrFile to fd 2...
	I0804 00:25:24.497028  365167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:25:24.497222  365167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:25:24.497853  365167 out.go:298] Setting JSON to false
	I0804 00:25:24.498850  365167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":32872,"bootTime":1722698252,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:25:24.498939  365167 start.go:139] virtualization: kvm guest
	I0804 00:25:24.501166  365167 out.go:177] * [multinode-453015] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:25:24.502757  365167 out.go:177]   - MINIKUBE_LOCATION=19370
	I0804 00:25:24.502759  365167 notify.go:220] Checking for updates...
	I0804 00:25:24.504047  365167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:25:24.505361  365167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:25:24.506570  365167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:25:24.507779  365167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:25:24.509099  365167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:25:24.510779  365167 config.go:182] Loaded profile config "multinode-453015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:25:24.510876  365167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:25:24.511344  365167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:25:24.511399  365167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:25:24.527810  365167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0804 00:25:24.528277  365167 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:25:24.528968  365167 main.go:141] libmachine: Using API Version  1
	I0804 00:25:24.529003  365167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:25:24.529385  365167 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:25:24.529623  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:25:24.566602  365167 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:25:24.567934  365167 start.go:297] selected driver: kvm2
	I0804 00:25:24.567947  365167 start.go:901] validating driver "kvm2" against &{Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:25:24.568099  365167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:25:24.568408  365167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:25:24.568474  365167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:25:24.584291  365167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:25:24.585309  365167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:25:24.585387  365167 cni.go:84] Creating CNI manager for ""
	I0804 00:25:24.585404  365167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 00:25:24.585492  365167 start.go:340] cluster config:
	{Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:25:24.585720  365167 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:25:24.588184  365167 out.go:177] * Starting "multinode-453015" primary control-plane node in "multinode-453015" cluster
	I0804 00:25:24.589521  365167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:25:24.589567  365167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:25:24.589579  365167 cache.go:56] Caching tarball of preloaded images
	I0804 00:25:24.589670  365167 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:25:24.589693  365167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:25:24.589824  365167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/config.json ...
	I0804 00:25:24.590103  365167 start.go:360] acquireMachinesLock for multinode-453015: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:25:24.590166  365167 start.go:364] duration metric: took 37.785µs to acquireMachinesLock for "multinode-453015"
	I0804 00:25:24.590189  365167 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:25:24.590200  365167 fix.go:54] fixHost starting: 
	I0804 00:25:24.590469  365167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:25:24.590509  365167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:25:24.605373  365167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0804 00:25:24.605846  365167 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:25:24.606474  365167 main.go:141] libmachine: Using API Version  1
	I0804 00:25:24.606499  365167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:25:24.606900  365167 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:25:24.607143  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:25:24.607334  365167 main.go:141] libmachine: (multinode-453015) Calling .GetState
	I0804 00:25:24.609029  365167 fix.go:112] recreateIfNeeded on multinode-453015: state=Running err=<nil>
	W0804 00:25:24.609048  365167 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:25:24.611168  365167 out.go:177] * Updating the running kvm2 "multinode-453015" VM ...
	I0804 00:25:24.612577  365167 machine.go:94] provisionDockerMachine start ...
	I0804 00:25:24.612611  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:25:24.612888  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.615359  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.615914  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.615946  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.616119  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:24.616302  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.616461  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.616594  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:24.616757  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:24.616974  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:24.616984  365167 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:25:24.727400  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453015
	
	I0804 00:25:24.727436  365167 main.go:141] libmachine: (multinode-453015) Calling .GetMachineName
	I0804 00:25:24.727677  365167 buildroot.go:166] provisioning hostname "multinode-453015"
	I0804 00:25:24.727709  365167 main.go:141] libmachine: (multinode-453015) Calling .GetMachineName
	I0804 00:25:24.727919  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.730722  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.731106  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.731147  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.731266  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:24.731451  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.731608  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.731916  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:24.732109  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:24.732287  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:24.732302  365167 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-453015 && echo "multinode-453015" | sudo tee /etc/hostname
	I0804 00:25:24.854275  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-453015
	
	I0804 00:25:24.854302  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.857285  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.857688  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.857730  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.857912  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:24.858128  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.858307  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:24.858446  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:24.858600  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:24.858781  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:24.858796  365167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-453015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-453015/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-453015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:25:24.966976  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:25:24.967022  365167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0804 00:25:24.967093  365167 buildroot.go:174] setting up certificates
	I0804 00:25:24.967106  365167 provision.go:84] configureAuth start
	I0804 00:25:24.967122  365167 main.go:141] libmachine: (multinode-453015) Calling .GetMachineName
	I0804 00:25:24.967565  365167 main.go:141] libmachine: (multinode-453015) Calling .GetIP
	I0804 00:25:24.970191  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.970550  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.970583  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.970747  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:24.973054  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.973427  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:24.973457  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:24.973584  365167 provision.go:143] copyHostCerts
	I0804 00:25:24.973614  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0804 00:25:24.973667  365167 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0804 00:25:24.973678  365167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0804 00:25:24.973758  365167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0804 00:25:24.973882  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0804 00:25:24.973915  365167 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0804 00:25:24.973925  365167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0804 00:25:24.973969  365167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0804 00:25:24.974032  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0804 00:25:24.974057  365167 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0804 00:25:24.974066  365167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0804 00:25:24.974101  365167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0804 00:25:24.974164  365167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.multinode-453015 san=[127.0.0.1 192.168.39.23 localhost minikube multinode-453015]
	I0804 00:25:25.081152  365167 provision.go:177] copyRemoteCerts
	I0804 00:25:25.081295  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:25:25.081332  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:25.084040  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.084422  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:25.084450  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.084626  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:25.084828  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:25.085020  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:25.085157  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:25:25.173662  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 00:25:25.173743  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0804 00:25:25.202657  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 00:25:25.202753  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:25:25.245307  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 00:25:25.245380  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0804 00:25:25.274556  365167 provision.go:87] duration metric: took 307.435631ms to configureAuth
	I0804 00:25:25.274592  365167 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:25:25.274839  365167 config.go:182] Loaded profile config "multinode-453015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:25:25.274916  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:25:25.277593  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.277933  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:25:25.277976  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:25:25.278136  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:25:25.278318  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:25.278490  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:25:25.278607  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:25:25.278779  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:25:25.278956  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:25:25.278970  365167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:26:56.000723  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:26:56.000759  365167 machine.go:97] duration metric: took 1m31.388160451s to provisionDockerMachine
	I0804 00:26:56.000774  365167 start.go:293] postStartSetup for "multinode-453015" (driver="kvm2")
	I0804 00:26:56.000785  365167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:26:56.000805  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.001219  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:26:56.001283  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.004882  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.005311  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.005343  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.005456  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.005712  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.005917  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.006067  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:26:56.093811  365167 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:26:56.098373  365167 command_runner.go:130] > NAME=Buildroot
	I0804 00:26:56.098402  365167 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0804 00:26:56.098408  365167 command_runner.go:130] > ID=buildroot
	I0804 00:26:56.098429  365167 command_runner.go:130] > VERSION_ID=2023.02.9
	I0804 00:26:56.098436  365167 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0804 00:26:56.098483  365167 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:26:56.098502  365167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0804 00:26:56.098593  365167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0804 00:26:56.098693  365167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0804 00:26:56.098709  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /etc/ssl/certs/3310972.pem
	I0804 00:26:56.098798  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:26:56.109679  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:26:56.136006  365167 start.go:296] duration metric: took 135.215306ms for postStartSetup
	I0804 00:26:56.136054  365167 fix.go:56] duration metric: took 1m31.545854903s for fixHost
	I0804 00:26:56.136088  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.138687  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.139216  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.139244  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.139412  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.139650  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.139839  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.139998  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.140152  365167 main.go:141] libmachine: Using SSH client type: native
	I0804 00:26:56.140388  365167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0804 00:26:56.140403  365167 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:26:56.246368  365167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722731216.226438620
	
	I0804 00:26:56.246400  365167 fix.go:216] guest clock: 1722731216.226438620
	I0804 00:26:56.246411  365167 fix.go:229] Guest: 2024-08-04 00:26:56.22643862 +0000 UTC Remote: 2024-08-04 00:26:56.136067969 +0000 UTC m=+91.677694190 (delta=90.370651ms)
	I0804 00:26:56.246440  365167 fix.go:200] guest clock delta is within tolerance: 90.370651ms
	I0804 00:26:56.246447  365167 start.go:83] releasing machines lock for "multinode-453015", held for 1m31.656268618s
	I0804 00:26:56.246508  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.246823  365167 main.go:141] libmachine: (multinode-453015) Calling .GetIP
	I0804 00:26:56.249371  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.249807  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.249830  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.249946  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.250464  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.250633  365167 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:26:56.250758  365167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:26:56.250801  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.250862  365167 ssh_runner.go:195] Run: cat /version.json
	I0804 00:26:56.250891  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:26:56.253545  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.253836  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.254019  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.254048  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.254173  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.254178  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:56.254196  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:56.254335  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.254407  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:26:56.254522  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.254587  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:26:56.254653  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:26:56.254704  365167 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:26:56.254841  365167 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:26:56.330901  365167 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0804 00:26:56.331124  365167 ssh_runner.go:195] Run: systemctl --version
	I0804 00:26:56.354072  365167 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 00:26:56.354913  365167 command_runner.go:130] > systemd 252 (252)
	I0804 00:26:56.354967  365167 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0804 00:26:56.355035  365167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:26:56.521151  365167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 00:26:56.528298  365167 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0804 00:26:56.528352  365167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:26:56.528416  365167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:26:56.538664  365167 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 00:26:56.538699  365167 start.go:495] detecting cgroup driver to use...
	I0804 00:26:56.538772  365167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:26:56.558580  365167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:26:56.573289  365167 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:26:56.573355  365167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:26:56.588131  365167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:26:56.602596  365167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:26:56.760968  365167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:26:56.922806  365167 docker.go:233] disabling docker service ...
	I0804 00:26:56.922891  365167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:26:56.943968  365167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:26:56.959205  365167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:26:57.108075  365167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:26:57.248638  365167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:26:57.265297  365167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:26:57.285174  365167 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0804 00:26:57.285218  365167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:26:57.285282  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.302481  365167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:26:57.302563  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.316640  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.330841  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.342444  365167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:26:57.390376  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.424164  365167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.453285  365167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:26:57.474444  365167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:26:57.485887  365167 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 00:26:57.486184  365167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:26:57.497288  365167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:26:57.655627  365167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:26:57.927356  365167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:26:57.927428  365167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:26:57.932469  365167 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0804 00:26:57.932504  365167 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 00:26:57.932511  365167 command_runner.go:130] > Device: 0,22	Inode: 1396        Links: 1
	I0804 00:26:57.932517  365167 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 00:26:57.932532  365167 command_runner.go:130] > Access: 2024-08-04 00:26:57.775428491 +0000
	I0804 00:26:57.932543  365167 command_runner.go:130] > Modify: 2024-08-04 00:26:57.775428491 +0000
	I0804 00:26:57.932550  365167 command_runner.go:130] > Change: 2024-08-04 00:26:57.775428491 +0000
	I0804 00:26:57.932556  365167 command_runner.go:130] >  Birth: -
	I0804 00:26:57.932601  365167 start.go:563] Will wait 60s for crictl version
	I0804 00:26:57.932659  365167 ssh_runner.go:195] Run: which crictl
	I0804 00:26:57.936968  365167 command_runner.go:130] > /usr/bin/crictl
	I0804 00:26:57.937058  365167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:26:57.983207  365167 command_runner.go:130] > Version:  0.1.0
	I0804 00:26:57.983235  365167 command_runner.go:130] > RuntimeName:  cri-o
	I0804 00:26:57.983240  365167 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0804 00:26:57.983245  365167 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 00:26:57.983399  365167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:26:57.983510  365167 ssh_runner.go:195] Run: crio --version
	I0804 00:26:58.013573  365167 command_runner.go:130] > crio version 1.29.1
	I0804 00:26:58.013600  365167 command_runner.go:130] > Version:        1.29.1
	I0804 00:26:58.013608  365167 command_runner.go:130] > GitCommit:      unknown
	I0804 00:26:58.013614  365167 command_runner.go:130] > GitCommitDate:  unknown
	I0804 00:26:58.013619  365167 command_runner.go:130] > GitTreeState:   clean
	I0804 00:26:58.013626  365167 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 00:26:58.013631  365167 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 00:26:58.013636  365167 command_runner.go:130] > Compiler:       gc
	I0804 00:26:58.013642  365167 command_runner.go:130] > Platform:       linux/amd64
	I0804 00:26:58.013648  365167 command_runner.go:130] > Linkmode:       dynamic
	I0804 00:26:58.013655  365167 command_runner.go:130] > BuildTags:      
	I0804 00:26:58.013662  365167 command_runner.go:130] >   containers_image_ostree_stub
	I0804 00:26:58.013669  365167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 00:26:58.013679  365167 command_runner.go:130] >   btrfs_noversion
	I0804 00:26:58.013687  365167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 00:26:58.013694  365167 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 00:26:58.013700  365167 command_runner.go:130] >   seccomp
	I0804 00:26:58.013710  365167 command_runner.go:130] > LDFlags:          unknown
	I0804 00:26:58.013717  365167 command_runner.go:130] > SeccompEnabled:   true
	I0804 00:26:58.013735  365167 command_runner.go:130] > AppArmorEnabled:  false
	I0804 00:26:58.015071  365167 ssh_runner.go:195] Run: crio --version
	I0804 00:26:58.046148  365167 command_runner.go:130] > crio version 1.29.1
	I0804 00:26:58.046178  365167 command_runner.go:130] > Version:        1.29.1
	I0804 00:26:58.046186  365167 command_runner.go:130] > GitCommit:      unknown
	I0804 00:26:58.046192  365167 command_runner.go:130] > GitCommitDate:  unknown
	I0804 00:26:58.046197  365167 command_runner.go:130] > GitTreeState:   clean
	I0804 00:26:58.046203  365167 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 00:26:58.046207  365167 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 00:26:58.046211  365167 command_runner.go:130] > Compiler:       gc
	I0804 00:26:58.046217  365167 command_runner.go:130] > Platform:       linux/amd64
	I0804 00:26:58.046223  365167 command_runner.go:130] > Linkmode:       dynamic
	I0804 00:26:58.046235  365167 command_runner.go:130] > BuildTags:      
	I0804 00:26:58.046242  365167 command_runner.go:130] >   containers_image_ostree_stub
	I0804 00:26:58.046251  365167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 00:26:58.046261  365167 command_runner.go:130] >   btrfs_noversion
	I0804 00:26:58.046270  365167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 00:26:58.046280  365167 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 00:26:58.046289  365167 command_runner.go:130] >   seccomp
	I0804 00:26:58.046299  365167 command_runner.go:130] > LDFlags:          unknown
	I0804 00:26:58.046308  365167 command_runner.go:130] > SeccompEnabled:   true
	I0804 00:26:58.046318  365167 command_runner.go:130] > AppArmorEnabled:  false
	I0804 00:26:58.049234  365167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:26:58.050832  365167 main.go:141] libmachine: (multinode-453015) Calling .GetIP
	I0804 00:26:58.053788  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:58.054115  365167 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:26:58.054147  365167 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:26:58.054381  365167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:26:58.059352  365167 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0804 00:26:58.059504  365167 kubeadm.go:883] updating cluster {Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:26:58.059763  365167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:26:58.059827  365167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:26:58.105400  365167 command_runner.go:130] > {
	I0804 00:26:58.105433  365167 command_runner.go:130] >   "images": [
	I0804 00:26:58.105439  365167 command_runner.go:130] >     {
	I0804 00:26:58.105452  365167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 00:26:58.105459  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105468  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 00:26:58.105474  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105481  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105495  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 00:26:58.105516  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 00:26:58.105526  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105532  365167 command_runner.go:130] >       "size": "87165492",
	I0804 00:26:58.105538  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.105546  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.105561  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.105570  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.105576  365167 command_runner.go:130] >     },
	I0804 00:26:58.105584  365167 command_runner.go:130] >     {
	I0804 00:26:58.105606  365167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 00:26:58.105616  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105624  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 00:26:58.105629  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105636  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105648  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 00:26:58.105673  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 00:26:58.105681  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105688  365167 command_runner.go:130] >       "size": "87174707",
	I0804 00:26:58.105697  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.105712  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.105721  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.105732  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.105739  365167 command_runner.go:130] >     },
	I0804 00:26:58.105745  365167 command_runner.go:130] >     {
	I0804 00:26:58.105757  365167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 00:26:58.105766  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105776  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 00:26:58.105783  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105789  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105802  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 00:26:58.105815  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 00:26:58.105822  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105828  365167 command_runner.go:130] >       "size": "1363676",
	I0804 00:26:58.105836  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.105844  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.105852  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.105860  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.105865  365167 command_runner.go:130] >     },
	I0804 00:26:58.105872  365167 command_runner.go:130] >     {
	I0804 00:26:58.105881  365167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 00:26:58.105889  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.105899  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 00:26:58.105907  365167 command_runner.go:130] >       ],
	I0804 00:26:58.105913  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.105960  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 00:26:58.105998  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 00:26:58.106006  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106012  365167 command_runner.go:130] >       "size": "31470524",
	I0804 00:26:58.106020  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.106029  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106038  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106053  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106061  365167 command_runner.go:130] >     },
	I0804 00:26:58.106068  365167 command_runner.go:130] >     {
	I0804 00:26:58.106079  365167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 00:26:58.106087  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106099  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 00:26:58.106106  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106116  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106131  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 00:26:58.106145  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 00:26:58.106153  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106161  365167 command_runner.go:130] >       "size": "61245718",
	I0804 00:26:58.106170  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.106179  365167 command_runner.go:130] >       "username": "nonroot",
	I0804 00:26:58.106188  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106197  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106205  365167 command_runner.go:130] >     },
	I0804 00:26:58.106210  365167 command_runner.go:130] >     {
	I0804 00:26:58.106222  365167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 00:26:58.106231  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106239  365167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 00:26:58.106248  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106256  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106269  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 00:26:58.106281  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 00:26:58.106290  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106300  365167 command_runner.go:130] >       "size": "150779692",
	I0804 00:26:58.106309  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106319  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106324  365167 command_runner.go:130] >       },
	I0804 00:26:58.106331  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106340  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106348  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106356  365167 command_runner.go:130] >     },
	I0804 00:26:58.106361  365167 command_runner.go:130] >     {
	I0804 00:26:58.106371  365167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 00:26:58.106389  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106400  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 00:26:58.106409  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106415  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106427  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 00:26:58.106440  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 00:26:58.106447  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106456  365167 command_runner.go:130] >       "size": "117609954",
	I0804 00:26:58.106464  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106473  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106480  365167 command_runner.go:130] >       },
	I0804 00:26:58.106486  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106495  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106500  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106507  365167 command_runner.go:130] >     },
	I0804 00:26:58.106512  365167 command_runner.go:130] >     {
	I0804 00:26:58.106523  365167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 00:26:58.106531  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106541  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 00:26:58.106549  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106557  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106593  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 00:26:58.106608  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 00:26:58.106617  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106626  365167 command_runner.go:130] >       "size": "112198984",
	I0804 00:26:58.106634  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106642  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106648  365167 command_runner.go:130] >       },
	I0804 00:26:58.106657  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106662  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106667  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106672  365167 command_runner.go:130] >     },
	I0804 00:26:58.106676  365167 command_runner.go:130] >     {
	I0804 00:26:58.106685  365167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 00:26:58.106690  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106698  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 00:26:58.106713  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106720  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106731  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 00:26:58.106741  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 00:26:58.106747  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106753  365167 command_runner.go:130] >       "size": "85953945",
	I0804 00:26:58.106758  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.106764  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106770  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106780  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106787  365167 command_runner.go:130] >     },
	I0804 00:26:58.106793  365167 command_runner.go:130] >     {
	I0804 00:26:58.106804  365167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 00:26:58.106813  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106823  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 00:26:58.106831  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106839  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106850  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 00:26:58.106864  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 00:26:58.106872  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106878  365167 command_runner.go:130] >       "size": "63051080",
	I0804 00:26:58.106886  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.106894  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.106902  365167 command_runner.go:130] >       },
	I0804 00:26:58.106909  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.106917  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.106926  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.106934  365167 command_runner.go:130] >     },
	I0804 00:26:58.106940  365167 command_runner.go:130] >     {
	I0804 00:26:58.106951  365167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 00:26:58.106959  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.106967  365167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 00:26:58.106982  365167 command_runner.go:130] >       ],
	I0804 00:26:58.106987  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.106998  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 00:26:58.107010  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 00:26:58.107029  365167 command_runner.go:130] >       ],
	I0804 00:26:58.107039  365167 command_runner.go:130] >       "size": "750414",
	I0804 00:26:58.107045  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.107068  365167 command_runner.go:130] >         "value": "65535"
	I0804 00:26:58.107077  365167 command_runner.go:130] >       },
	I0804 00:26:58.107083  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.107092  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.107098  365167 command_runner.go:130] >       "pinned": true
	I0804 00:26:58.107105  365167 command_runner.go:130] >     }
	I0804 00:26:58.107110  365167 command_runner.go:130] >   ]
	I0804 00:26:58.107118  365167 command_runner.go:130] > }
	I0804 00:26:58.107414  365167 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:26:58.107434  365167 crio.go:433] Images already preloaded, skipping extraction
	I0804 00:26:58.107496  365167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:26:58.141278  365167 command_runner.go:130] > {
	I0804 00:26:58.141307  365167 command_runner.go:130] >   "images": [
	I0804 00:26:58.141314  365167 command_runner.go:130] >     {
	I0804 00:26:58.141337  365167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 00:26:58.141344  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141354  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 00:26:58.141359  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141366  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141379  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 00:26:58.141392  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 00:26:58.141399  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141405  365167 command_runner.go:130] >       "size": "87165492",
	I0804 00:26:58.141414  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141420  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141430  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141438  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141444  365167 command_runner.go:130] >     },
	I0804 00:26:58.141451  365167 command_runner.go:130] >     {
	I0804 00:26:58.141459  365167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 00:26:58.141466  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141474  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 00:26:58.141482  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141487  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141499  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 00:26:58.141523  365167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 00:26:58.141533  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141539  365167 command_runner.go:130] >       "size": "87174707",
	I0804 00:26:58.141547  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141560  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141568  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141576  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141584  365167 command_runner.go:130] >     },
	I0804 00:26:58.141589  365167 command_runner.go:130] >     {
	I0804 00:26:58.141601  365167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 00:26:58.141610  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141621  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 00:26:58.141629  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141635  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141648  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 00:26:58.141667  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 00:26:58.141675  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141681  365167 command_runner.go:130] >       "size": "1363676",
	I0804 00:26:58.141687  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141696  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141704  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141712  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141718  365167 command_runner.go:130] >     },
	I0804 00:26:58.141723  365167 command_runner.go:130] >     {
	I0804 00:26:58.141734  365167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 00:26:58.141742  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141753  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 00:26:58.141760  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141769  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141780  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 00:26:58.141807  365167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 00:26:58.141815  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141819  365167 command_runner.go:130] >       "size": "31470524",
	I0804 00:26:58.141828  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141837  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.141846  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.141855  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.141863  365167 command_runner.go:130] >     },
	I0804 00:26:58.141871  365167 command_runner.go:130] >     {
	I0804 00:26:58.141880  365167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 00:26:58.141890  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.141900  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 00:26:58.141908  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141914  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.141928  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 00:26:58.141943  365167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 00:26:58.141951  365167 command_runner.go:130] >       ],
	I0804 00:26:58.141960  365167 command_runner.go:130] >       "size": "61245718",
	I0804 00:26:58.141976  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.141985  365167 command_runner.go:130] >       "username": "nonroot",
	I0804 00:26:58.141991  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142007  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142015  365167 command_runner.go:130] >     },
	I0804 00:26:58.142020  365167 command_runner.go:130] >     {
	I0804 00:26:58.142031  365167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 00:26:58.142041  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142050  365167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 00:26:58.142059  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142067  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142077  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 00:26:58.142089  365167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 00:26:58.142097  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142103  365167 command_runner.go:130] >       "size": "150779692",
	I0804 00:26:58.142111  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142120  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142129  365167 command_runner.go:130] >       },
	I0804 00:26:58.142137  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142146  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142155  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142159  365167 command_runner.go:130] >     },
	I0804 00:26:58.142166  365167 command_runner.go:130] >     {
	I0804 00:26:58.142175  365167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 00:26:58.142183  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142192  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 00:26:58.142199  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142206  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142221  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 00:26:58.142233  365167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 00:26:58.142241  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142247  365167 command_runner.go:130] >       "size": "117609954",
	I0804 00:26:58.142254  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142260  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142267  365167 command_runner.go:130] >       },
	I0804 00:26:58.142273  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142282  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142290  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142298  365167 command_runner.go:130] >     },
	I0804 00:26:58.142313  365167 command_runner.go:130] >     {
	I0804 00:26:58.142325  365167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 00:26:58.142333  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142342  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 00:26:58.142350  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142356  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142391  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 00:26:58.142406  365167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 00:26:58.142412  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142419  365167 command_runner.go:130] >       "size": "112198984",
	I0804 00:26:58.142427  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142433  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142440  365167 command_runner.go:130] >       },
	I0804 00:26:58.142456  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142465  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142472  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142479  365167 command_runner.go:130] >     },
	I0804 00:26:58.142485  365167 command_runner.go:130] >     {
	I0804 00:26:58.142497  365167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 00:26:58.142505  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142513  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 00:26:58.142520  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142527  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142538  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 00:26:58.142553  365167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 00:26:58.142561  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142567  365167 command_runner.go:130] >       "size": "85953945",
	I0804 00:26:58.142576  365167 command_runner.go:130] >       "uid": null,
	I0804 00:26:58.142582  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142591  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142598  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142606  365167 command_runner.go:130] >     },
	I0804 00:26:58.142611  365167 command_runner.go:130] >     {
	I0804 00:26:58.142621  365167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 00:26:58.142630  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142639  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 00:26:58.142656  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142677  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142691  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 00:26:58.142704  365167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 00:26:58.142709  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142719  365167 command_runner.go:130] >       "size": "63051080",
	I0804 00:26:58.142725  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142733  365167 command_runner.go:130] >         "value": "0"
	I0804 00:26:58.142739  365167 command_runner.go:130] >       },
	I0804 00:26:58.142765  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142774  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142781  365167 command_runner.go:130] >       "pinned": false
	I0804 00:26:58.142789  365167 command_runner.go:130] >     },
	I0804 00:26:58.142794  365167 command_runner.go:130] >     {
	I0804 00:26:58.142805  365167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 00:26:58.142814  365167 command_runner.go:130] >       "repoTags": [
	I0804 00:26:58.142821  365167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 00:26:58.142829  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142835  365167 command_runner.go:130] >       "repoDigests": [
	I0804 00:26:58.142848  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 00:26:58.142864  365167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 00:26:58.142872  365167 command_runner.go:130] >       ],
	I0804 00:26:58.142879  365167 command_runner.go:130] >       "size": "750414",
	I0804 00:26:58.142888  365167 command_runner.go:130] >       "uid": {
	I0804 00:26:58.142895  365167 command_runner.go:130] >         "value": "65535"
	I0804 00:26:58.142900  365167 command_runner.go:130] >       },
	I0804 00:26:58.142908  365167 command_runner.go:130] >       "username": "",
	I0804 00:26:58.142914  365167 command_runner.go:130] >       "spec": null,
	I0804 00:26:58.142923  365167 command_runner.go:130] >       "pinned": true
	I0804 00:26:58.142928  365167 command_runner.go:130] >     }
	I0804 00:26:58.142935  365167 command_runner.go:130] >   ]
	I0804 00:26:58.142940  365167 command_runner.go:130] > }
	I0804 00:26:58.143152  365167 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:26:58.143168  365167 cache_images.go:84] Images are preloaded, skipping loading
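The listing above is what minikube reads back from CRI-O before concluding that the preloaded images are already present. For orientation only, here is a minimal Go sketch that decodes an image listing of this shape; the "images" wrapper key, the file name, and the crictl invocation in the comment are assumptions, not taken from this run:

// Hypothetical sketch, not minikube's code: decode an image listing shaped like the
// JSON logged above (id, repoTags, repoDigests, size, username, pinned).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // byte count, serialized as a string
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// images.json is assumed to hold output such as `crictl images -o json`.
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(data, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-55s %10s bytes  pinned=%v\n", img.RepoTags[0], img.Size, img.Pinned)
		}
	}
}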
	I0804 00:26:58.143177  365167 kubeadm.go:934] updating node { 192.168.39.23 8443 v1.30.3 crio true true} ...
	I0804 00:26:58.143327  365167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-453015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
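The unit drop-in above is the kubelet configuration minikube generates for the node, with the Kubernetes version, hostname and node IP substituted in. A rough sketch of that substitution using Go's text/template, assuming an illustrative template string rather than minikube's actual one:

// Illustrative only: render a kubelet ExecStart drop-in like the one above from node
// parameters with text/template. The template string here is an assumption, not the
// actual template minikube's kubeadm package uses.
package main

import (
	"os"
	"text/template"
)

type nodeParams struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, nodeParams{
		KubernetesVersion: "v1.30.3",
		Hostname:          "multinode-453015",
		NodeIP:            "192.168.39.23",
	}); err != nil {
		panic(err)
	}
}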
	I0804 00:26:58.143409  365167 ssh_runner.go:195] Run: crio config
	I0804 00:26:58.186081  365167 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0804 00:26:58.186118  365167 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0804 00:26:58.186128  365167 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0804 00:26:58.186133  365167 command_runner.go:130] > #
	I0804 00:26:58.186165  365167 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0804 00:26:58.186176  365167 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0804 00:26:58.186186  365167 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0804 00:26:58.186199  365167 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0804 00:26:58.186206  365167 command_runner.go:130] > # reload'.
	I0804 00:26:58.186215  365167 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0804 00:26:58.186230  365167 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0804 00:26:58.186239  365167 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0804 00:26:58.186249  365167 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0804 00:26:58.186255  365167 command_runner.go:130] > [crio]
	I0804 00:26:58.186264  365167 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0804 00:26:58.186275  365167 command_runner.go:130] > # container images, in this directory.
	I0804 00:26:58.186283  365167 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0804 00:26:58.186300  365167 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0804 00:26:58.186308  365167 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0804 00:26:58.186321  365167 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory rather than under Root.
	I0804 00:26:58.186644  365167 command_runner.go:130] > # imagestore = ""
	I0804 00:26:58.186669  365167 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0804 00:26:58.186679  365167 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0804 00:26:58.186789  365167 command_runner.go:130] > storage_driver = "overlay"
	I0804 00:26:58.186806  365167 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0804 00:26:58.186822  365167 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0804 00:26:58.186828  365167 command_runner.go:130] > storage_option = [
	I0804 00:26:58.187045  365167 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0804 00:26:58.187061  365167 command_runner.go:130] > ]
	I0804 00:26:58.187072  365167 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0804 00:26:58.187080  365167 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0804 00:26:58.187116  365167 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0804 00:26:58.187129  365167 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0804 00:26:58.187135  365167 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0804 00:26:58.187156  365167 command_runner.go:130] > # always happen on a node reboot
	I0804 00:26:58.187369  365167 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0804 00:26:58.187388  365167 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0804 00:26:58.187395  365167 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0804 00:26:58.187400  365167 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0804 00:26:58.187502  365167 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0804 00:26:58.187525  365167 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0804 00:26:58.187539  365167 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0804 00:26:58.187825  365167 command_runner.go:130] > # internal_wipe = true
	I0804 00:26:58.187844  365167 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0804 00:26:58.187853  365167 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0804 00:26:58.188056  365167 command_runner.go:130] > # internal_repair = false
	I0804 00:26:58.188067  365167 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0804 00:26:58.188074  365167 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0804 00:26:58.188079  365167 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0804 00:26:58.188336  365167 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0804 00:26:58.188352  365167 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0804 00:26:58.188359  365167 command_runner.go:130] > [crio.api]
	I0804 00:26:58.188367  365167 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0804 00:26:58.188596  365167 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0804 00:26:58.188612  365167 command_runner.go:130] > # IP address on which the stream server will listen.
	I0804 00:26:58.188943  365167 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0804 00:26:58.188972  365167 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0804 00:26:58.188981  365167 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0804 00:26:58.189240  365167 command_runner.go:130] > # stream_port = "0"
	I0804 00:26:58.189255  365167 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0804 00:26:58.189538  365167 command_runner.go:130] > # stream_enable_tls = false
	I0804 00:26:58.189555  365167 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0804 00:26:58.189843  365167 command_runner.go:130] > # stream_idle_timeout = ""
	I0804 00:26:58.189858  365167 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0804 00:26:58.189868  365167 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0804 00:26:58.189874  365167 command_runner.go:130] > # minutes.
	I0804 00:26:58.190088  365167 command_runner.go:130] > # stream_tls_cert = ""
	I0804 00:26:58.190104  365167 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0804 00:26:58.190114  365167 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0804 00:26:58.190288  365167 command_runner.go:130] > # stream_tls_key = ""
	I0804 00:26:58.190301  365167 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0804 00:26:58.190311  365167 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0804 00:26:58.190364  365167 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0804 00:26:58.190556  365167 command_runner.go:130] > # stream_tls_ca = ""
	I0804 00:26:58.190569  365167 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 00:26:58.190761  365167 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0804 00:26:58.190772  365167 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 00:26:58.190887  365167 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0804 00:26:58.190910  365167 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0804 00:26:58.190923  365167 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0804 00:26:58.190930  365167 command_runner.go:130] > [crio.runtime]
	I0804 00:26:58.190936  365167 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0804 00:26:58.190945  365167 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0804 00:26:58.190948  365167 command_runner.go:130] > # "nofile=1024:2048"
	I0804 00:26:58.190956  365167 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0804 00:26:58.191058  365167 command_runner.go:130] > # default_ulimits = [
	I0804 00:26:58.191211  365167 command_runner.go:130] > # ]
	I0804 00:26:58.191221  365167 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0804 00:26:58.191545  365167 command_runner.go:130] > # no_pivot = false
	I0804 00:26:58.191561  365167 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0804 00:26:58.191571  365167 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0804 00:26:58.191994  365167 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0804 00:26:58.192010  365167 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0804 00:26:58.192018  365167 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0804 00:26:58.192028  365167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 00:26:58.192095  365167 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0804 00:26:58.192115  365167 command_runner.go:130] > # Cgroup setting for conmon
	I0804 00:26:58.192126  365167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0804 00:26:58.192257  365167 command_runner.go:130] > conmon_cgroup = "pod"
	I0804 00:26:58.192272  365167 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0804 00:26:58.192280  365167 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0804 00:26:58.192290  365167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 00:26:58.192299  365167 command_runner.go:130] > conmon_env = [
	I0804 00:26:58.192362  365167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 00:26:58.192419  365167 command_runner.go:130] > ]
	I0804 00:26:58.192431  365167 command_runner.go:130] > # Additional environment variables to set for all the
	I0804 00:26:58.192441  365167 command_runner.go:130] > # containers. These are overridden if set in the
	I0804 00:26:58.192453  365167 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0804 00:26:58.192568  365167 command_runner.go:130] > # default_env = [
	I0804 00:26:58.192778  365167 command_runner.go:130] > # ]
	I0804 00:26:58.192796  365167 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0804 00:26:58.192807  365167 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0804 00:26:58.192839  365167 command_runner.go:130] > # selinux = false
	I0804 00:26:58.192855  365167 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0804 00:26:58.192868  365167 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0804 00:26:58.192881  365167 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0804 00:26:58.192906  365167 command_runner.go:130] > # seccomp_profile = ""
	I0804 00:26:58.192920  365167 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0804 00:26:58.192930  365167 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0804 00:26:58.192949  365167 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0804 00:26:58.192961  365167 command_runner.go:130] > # which might increase security.
	I0804 00:26:58.192972  365167 command_runner.go:130] > # This option is currently deprecated,
	I0804 00:26:58.192984  365167 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0804 00:26:58.192994  365167 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0804 00:26:58.193007  365167 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0804 00:26:58.193019  365167 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0804 00:26:58.193033  365167 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0804 00:26:58.193042  365167 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0804 00:26:58.193053  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.193065  365167 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0804 00:26:58.193077  365167 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0804 00:26:58.193087  365167 command_runner.go:130] > # the cgroup blockio controller.
	I0804 00:26:58.193094  365167 command_runner.go:130] > # blockio_config_file = ""
	I0804 00:26:58.193106  365167 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0804 00:26:58.193116  365167 command_runner.go:130] > # blockio parameters.
	I0804 00:26:58.193128  365167 command_runner.go:130] > # blockio_reload = false
	I0804 00:26:58.193137  365167 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0804 00:26:58.193146  365167 command_runner.go:130] > # irqbalance daemon.
	I0804 00:26:58.193155  365167 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0804 00:26:58.193167  365167 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0804 00:26:58.193181  365167 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0804 00:26:58.193194  365167 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0804 00:26:58.193206  365167 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0804 00:26:58.193220  365167 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0804 00:26:58.193232  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.193248  365167 command_runner.go:130] > # rdt_config_file = ""
	I0804 00:26:58.193260  365167 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0804 00:26:58.193268  365167 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0804 00:26:58.193313  365167 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0804 00:26:58.193324  365167 command_runner.go:130] > # separate_pull_cgroup = ""
	I0804 00:26:58.193335  365167 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0804 00:26:58.193347  365167 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0804 00:26:58.193355  365167 command_runner.go:130] > # will be added.
	I0804 00:26:58.193362  365167 command_runner.go:130] > # default_capabilities = [
	I0804 00:26:58.193370  365167 command_runner.go:130] > # 	"CHOWN",
	I0804 00:26:58.193377  365167 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0804 00:26:58.193386  365167 command_runner.go:130] > # 	"FSETID",
	I0804 00:26:58.193392  365167 command_runner.go:130] > # 	"FOWNER",
	I0804 00:26:58.193398  365167 command_runner.go:130] > # 	"SETGID",
	I0804 00:26:58.193407  365167 command_runner.go:130] > # 	"SETUID",
	I0804 00:26:58.193413  365167 command_runner.go:130] > # 	"SETPCAP",
	I0804 00:26:58.193422  365167 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0804 00:26:58.193429  365167 command_runner.go:130] > # 	"KILL",
	I0804 00:26:58.193437  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193449  365167 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0804 00:26:58.193462  365167 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0804 00:26:58.193473  365167 command_runner.go:130] > # add_inheritable_capabilities = false
	I0804 00:26:58.193484  365167 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0804 00:26:58.193498  365167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 00:26:58.193515  365167 command_runner.go:130] > default_sysctls = [
	I0804 00:26:58.193526  365167 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0804 00:26:58.193532  365167 command_runner.go:130] > ]
	I0804 00:26:58.193539  365167 command_runner.go:130] > # List of devices on the host that a
	I0804 00:26:58.193549  365167 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0804 00:26:58.193558  365167 command_runner.go:130] > # allowed_devices = [
	I0804 00:26:58.193565  365167 command_runner.go:130] > # 	"/dev/fuse",
	I0804 00:26:58.193572  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193580  365167 command_runner.go:130] > # List of additional devices, specified as
	I0804 00:26:58.193594  365167 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0804 00:26:58.193606  365167 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0804 00:26:58.193618  365167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 00:26:58.193635  365167 command_runner.go:130] > # additional_devices = [
	I0804 00:26:58.193643  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193649  365167 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0804 00:26:58.193656  365167 command_runner.go:130] > # cdi_spec_dirs = [
	I0804 00:26:58.193659  365167 command_runner.go:130] > # 	"/etc/cdi",
	I0804 00:26:58.193663  365167 command_runner.go:130] > # 	"/var/run/cdi",
	I0804 00:26:58.193666  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193673  365167 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0804 00:26:58.193681  365167 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0804 00:26:58.193685  365167 command_runner.go:130] > # Defaults to false.
	I0804 00:26:58.193694  365167 command_runner.go:130] > # device_ownership_from_security_context = false
	I0804 00:26:58.193703  365167 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0804 00:26:58.193710  365167 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0804 00:26:58.193720  365167 command_runner.go:130] > # hooks_dir = [
	I0804 00:26:58.193728  365167 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0804 00:26:58.193743  365167 command_runner.go:130] > # ]
	I0804 00:26:58.193753  365167 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0804 00:26:58.193766  365167 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0804 00:26:58.193773  365167 command_runner.go:130] > # its default mounts from the following two files:
	I0804 00:26:58.193781  365167 command_runner.go:130] > #
	I0804 00:26:58.193791  365167 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0804 00:26:58.193804  365167 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0804 00:26:58.193815  365167 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0804 00:26:58.193823  365167 command_runner.go:130] > #
	I0804 00:26:58.193832  365167 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0804 00:26:58.193844  365167 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0804 00:26:58.193856  365167 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0804 00:26:58.193864  365167 command_runner.go:130] > #      only add mounts it finds in this file.
	I0804 00:26:58.193873  365167 command_runner.go:130] > #
	I0804 00:26:58.193879  365167 command_runner.go:130] > # default_mounts_file = ""
	I0804 00:26:58.193891  365167 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0804 00:26:58.193903  365167 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0804 00:26:58.193911  365167 command_runner.go:130] > pids_limit = 1024
	I0804 00:26:58.193920  365167 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0804 00:26:58.193933  365167 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0804 00:26:58.193954  365167 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0804 00:26:58.193978  365167 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0804 00:26:58.193987  365167 command_runner.go:130] > # log_size_max = -1
	I0804 00:26:58.193998  365167 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0804 00:26:58.194007  365167 command_runner.go:130] > # log_to_journald = false
	I0804 00:26:58.194017  365167 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0804 00:26:58.194029  365167 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0804 00:26:58.194041  365167 command_runner.go:130] > # Path to directory for container attach sockets.
	I0804 00:26:58.194052  365167 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0804 00:26:58.194063  365167 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0804 00:26:58.194073  365167 command_runner.go:130] > # bind_mount_prefix = ""
	I0804 00:26:58.194080  365167 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0804 00:26:58.194089  365167 command_runner.go:130] > # read_only = false
	I0804 00:26:58.194096  365167 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0804 00:26:58.194108  365167 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0804 00:26:58.194117  365167 command_runner.go:130] > # live configuration reload.
	I0804 00:26:58.194124  365167 command_runner.go:130] > # log_level = "info"
	I0804 00:26:58.194136  365167 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0804 00:26:58.194146  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.194154  365167 command_runner.go:130] > # log_filter = ""
	I0804 00:26:58.194175  365167 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0804 00:26:58.194188  365167 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0804 00:26:58.194197  365167 command_runner.go:130] > # separated by comma.
	I0804 00:26:58.194208  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194217  365167 command_runner.go:130] > # uid_mappings = ""
	I0804 00:26:58.194228  365167 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0804 00:26:58.194240  365167 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0804 00:26:58.194250  365167 command_runner.go:130] > # separated by comma.
	I0804 00:26:58.194261  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194270  365167 command_runner.go:130] > # gid_mappings = ""
	I0804 00:26:58.194279  365167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0804 00:26:58.194305  365167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 00:26:58.194317  365167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 00:26:58.194328  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194338  365167 command_runner.go:130] > # minimum_mappable_uid = -1
	I0804 00:26:58.194348  365167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0804 00:26:58.194360  365167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 00:26:58.194378  365167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 00:26:58.194393  365167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 00:26:58.194403  365167 command_runner.go:130] > # minimum_mappable_gid = -1
	I0804 00:26:58.194412  365167 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0804 00:26:58.194424  365167 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0804 00:26:58.194436  365167 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0804 00:26:58.194444  365167 command_runner.go:130] > # ctr_stop_timeout = 30
	I0804 00:26:58.194455  365167 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0804 00:26:58.194466  365167 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0804 00:26:58.194475  365167 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0804 00:26:58.194486  365167 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0804 00:26:58.194495  365167 command_runner.go:130] > drop_infra_ctr = false
	I0804 00:26:58.194504  365167 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0804 00:26:58.194516  365167 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0804 00:26:58.194531  365167 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0804 00:26:58.194541  365167 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0804 00:26:58.194553  365167 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0804 00:26:58.194565  365167 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0804 00:26:58.194576  365167 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0804 00:26:58.194588  365167 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0804 00:26:58.194597  365167 command_runner.go:130] > # shared_cpuset = ""
	I0804 00:26:58.194606  365167 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0804 00:26:58.194614  365167 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0804 00:26:58.194621  365167 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0804 00:26:58.194640  365167 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0804 00:26:58.194650  365167 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0804 00:26:58.194659  365167 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0804 00:26:58.194672  365167 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0804 00:26:58.194681  365167 command_runner.go:130] > # enable_criu_support = false
	I0804 00:26:58.194689  365167 command_runner.go:130] > # Enable/disable the generation of the container,
	I0804 00:26:58.194698  365167 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0804 00:26:58.194702  365167 command_runner.go:130] > # enable_pod_events = false
	I0804 00:26:58.194712  365167 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0804 00:26:58.194736  365167 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0804 00:26:58.194747  365167 command_runner.go:130] > # default_runtime = "runc"
	I0804 00:26:58.194767  365167 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0804 00:26:58.194780  365167 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0804 00:26:58.194796  365167 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0804 00:26:58.194809  365167 command_runner.go:130] > # creation as a file is not desired either.
	I0804 00:26:58.194822  365167 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0804 00:26:58.194834  365167 command_runner.go:130] > # the hostname is being managed dynamically.
	I0804 00:26:58.194840  365167 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0804 00:26:58.194846  365167 command_runner.go:130] > # ]
	I0804 00:26:58.194859  365167 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0804 00:26:58.194872  365167 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0804 00:26:58.194885  365167 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0804 00:26:58.194893  365167 command_runner.go:130] > # Each entry in the table should follow the format:
	I0804 00:26:58.194901  365167 command_runner.go:130] > #
	I0804 00:26:58.194910  365167 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0804 00:26:58.194921  365167 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0804 00:26:58.194992  365167 command_runner.go:130] > # runtime_type = "oci"
	I0804 00:26:58.195004  365167 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0804 00:26:58.195011  365167 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0804 00:26:58.195019  365167 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0804 00:26:58.195030  365167 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0804 00:26:58.195038  365167 command_runner.go:130] > # monitor_env = []
	I0804 00:26:58.195049  365167 command_runner.go:130] > # privileged_without_host_devices = false
	I0804 00:26:58.195058  365167 command_runner.go:130] > # allowed_annotations = []
	I0804 00:26:58.195067  365167 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0804 00:26:58.195076  365167 command_runner.go:130] > # Where:
	I0804 00:26:58.195083  365167 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0804 00:26:58.195095  365167 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0804 00:26:58.195105  365167 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0804 00:26:58.195118  365167 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0804 00:26:58.195127  365167 command_runner.go:130] > #   in $PATH.
	I0804 00:26:58.195136  365167 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0804 00:26:58.195147  365167 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0804 00:26:58.195159  365167 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0804 00:26:58.195163  365167 command_runner.go:130] > #   state.
	I0804 00:26:58.195170  365167 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0804 00:26:58.195182  365167 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0804 00:26:58.195202  365167 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0804 00:26:58.195220  365167 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0804 00:26:58.195233  365167 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0804 00:26:58.195246  365167 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0804 00:26:58.195257  365167 command_runner.go:130] > #   The currently recognized values are:
	I0804 00:26:58.195266  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0804 00:26:58.195280  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0804 00:26:58.195293  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0804 00:26:58.195305  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0804 00:26:58.195319  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0804 00:26:58.195332  365167 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0804 00:26:58.195345  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0804 00:26:58.195356  365167 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0804 00:26:58.195365  365167 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0804 00:26:58.195377  365167 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0804 00:26:58.195387  365167 command_runner.go:130] > #   deprecated option "conmon".
	I0804 00:26:58.195398  365167 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0804 00:26:58.195410  365167 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0804 00:26:58.195423  365167 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0804 00:26:58.195433  365167 command_runner.go:130] > #   should be moved to the container's cgroup
	I0804 00:26:58.195446  365167 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0804 00:26:58.195455  365167 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0804 00:26:58.195464  365167 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0804 00:26:58.195475  365167 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0804 00:26:58.195484  365167 command_runner.go:130] > #
	I0804 00:26:58.195492  365167 command_runner.go:130] > # Using the seccomp notifier feature:
	I0804 00:26:58.195501  365167 command_runner.go:130] > #
	I0804 00:26:58.195510  365167 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0804 00:26:58.195522  365167 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0804 00:26:58.195530  365167 command_runner.go:130] > #
	I0804 00:26:58.195542  365167 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0804 00:26:58.195553  365167 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0804 00:26:58.195558  365167 command_runner.go:130] > #
	I0804 00:26:58.195569  365167 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0804 00:26:58.195578  365167 command_runner.go:130] > # feature.
	I0804 00:26:58.195583  365167 command_runner.go:130] > #
	I0804 00:26:58.195601  365167 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0804 00:26:58.195613  365167 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0804 00:26:58.195626  365167 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0804 00:26:58.195638  365167 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0804 00:26:58.195741  365167 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0804 00:26:58.195756  365167 command_runner.go:130] > #
	I0804 00:26:58.195769  365167 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0804 00:26:58.195845  365167 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0804 00:26:58.195865  365167 command_runner.go:130] > #
	I0804 00:26:58.195885  365167 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0804 00:26:58.195906  365167 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0804 00:26:58.195914  365167 command_runner.go:130] > #
	I0804 00:26:58.195927  365167 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0804 00:26:58.195939  365167 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0804 00:26:58.195948  365167 command_runner.go:130] > # limitation.
	I0804 00:26:58.195959  365167 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0804 00:26:58.195969  365167 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0804 00:26:58.195977  365167 command_runner.go:130] > runtime_type = "oci"
	I0804 00:26:58.195987  365167 command_runner.go:130] > runtime_root = "/run/runc"
	I0804 00:26:58.195997  365167 command_runner.go:130] > runtime_config_path = ""
	I0804 00:26:58.196007  365167 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0804 00:26:58.196016  365167 command_runner.go:130] > monitor_cgroup = "pod"
	I0804 00:26:58.196026  365167 command_runner.go:130] > monitor_exec_cgroup = ""
	I0804 00:26:58.196034  365167 command_runner.go:130] > monitor_env = [
	I0804 00:26:58.196042  365167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 00:26:58.196048  365167 command_runner.go:130] > ]
	I0804 00:26:58.196056  365167 command_runner.go:130] > privileged_without_host_devices = false
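The [crio.runtime.runtimes.runc] table above is a concrete instance of the runtime-handler format described in the comments that precede it. A minimal sketch of decoding such a handler table in Go, assuming the github.com/BurntSushi/toml package and covering only the fields visible here:

// Rough sketch, assuming the github.com/BurntSushi/toml package: decode a runtime
// handler table shaped like the [crio.runtime.runtimes.runc] entry above. The structs
// are illustrative and cover only a subset of the fields CRI-O supports.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type runtimeHandler struct {
	RuntimePath   string `toml:"runtime_path"`
	RuntimeType   string `toml:"runtime_type"`
	RuntimeRoot   string `toml:"runtime_root"`
	MonitorPath   string `toml:"monitor_path"`
	MonitorCgroup string `toml:"monitor_cgroup"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	// In practice the input would be the full /etc/crio/crio.conf; a fragment is enough here.
	doc := `
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
runtime_root = "/run/runc"
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "pod"
`
	var cfg crioConfig
	if _, err := toml.Decode(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("runc handler: %+v\n", cfg.Crio.Runtime.Runtimes["runc"])
}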
	I0804 00:26:58.196069  365167 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0804 00:26:58.196081  365167 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0804 00:26:58.196094  365167 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0804 00:26:58.196109  365167 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0804 00:26:58.196125  365167 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0804 00:26:58.196135  365167 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0804 00:26:58.196149  365167 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0804 00:26:58.196165  365167 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0804 00:26:58.196178  365167 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0804 00:26:58.196200  365167 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0804 00:26:58.196209  365167 command_runner.go:130] > # Example:
	I0804 00:26:58.196217  365167 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0804 00:26:58.196224  365167 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0804 00:26:58.196230  365167 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0804 00:26:58.196235  365167 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0804 00:26:58.196241  365167 command_runner.go:130] > # cpuset = 0
	I0804 00:26:58.196247  365167 command_runner.go:130] > # cpushares = "0-1"
	I0804 00:26:58.196252  365167 command_runner.go:130] > # Where:
	I0804 00:26:58.196261  365167 command_runner.go:130] > # The workload name is workload-type.
	I0804 00:26:58.196272  365167 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0804 00:26:58.196280  365167 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0804 00:26:58.196289  365167 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0804 00:26:58.196301  365167 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0804 00:26:58.196310  365167 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0804 00:26:58.196315  365167 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0804 00:26:58.196321  365167 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0804 00:26:58.196328  365167 command_runner.go:130] > # Default value is set to true
	I0804 00:26:58.196335  365167 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0804 00:26:58.196344  365167 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0804 00:26:58.196351  365167 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0804 00:26:58.196358  365167 command_runner.go:130] > # Default value is set to 'false'
	I0804 00:26:58.196365  365167 command_runner.go:130] > # disable_hostport_mapping = false
	I0804 00:26:58.196374  365167 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0804 00:26:58.196378  365167 command_runner.go:130] > #
	I0804 00:26:58.196387  365167 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0804 00:26:58.196396  365167 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0804 00:26:58.196403  365167 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0804 00:26:58.196410  365167 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0804 00:26:58.196418  365167 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0804 00:26:58.196424  365167 command_runner.go:130] > [crio.image]
	I0804 00:26:58.196434  365167 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0804 00:26:58.196450  365167 command_runner.go:130] > # default_transport = "docker://"
	I0804 00:26:58.196462  365167 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0804 00:26:58.196474  365167 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0804 00:26:58.196483  365167 command_runner.go:130] > # global_auth_file = ""
	I0804 00:26:58.196504  365167 command_runner.go:130] > # The image used to instantiate infra containers.
	I0804 00:26:58.196516  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.196524  365167 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0804 00:26:58.196537  365167 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0804 00:26:58.196548  365167 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0804 00:26:58.196560  365167 command_runner.go:130] > # This option supports live configuration reload.
	I0804 00:26:58.196570  365167 command_runner.go:130] > # pause_image_auth_file = ""
	I0804 00:26:58.196581  365167 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0804 00:26:58.196590  365167 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0804 00:26:58.196602  365167 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0804 00:26:58.196614  365167 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0804 00:26:58.196627  365167 command_runner.go:130] > # pause_command = "/pause"
	I0804 00:26:58.196639  365167 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0804 00:26:58.196651  365167 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0804 00:26:58.196667  365167 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0804 00:26:58.196678  365167 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0804 00:26:58.196687  365167 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0804 00:26:58.196700  365167 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0804 00:26:58.196710  365167 command_runner.go:130] > # pinned_images = [
	I0804 00:26:58.196715  365167 command_runner.go:130] > # ]
	I0804 00:26:58.196728  365167 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0804 00:26:58.196740  365167 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0804 00:26:58.196753  365167 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0804 00:26:58.196765  365167 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0804 00:26:58.196776  365167 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0804 00:26:58.196782  365167 command_runner.go:130] > # signature_policy = ""
	I0804 00:26:58.196790  365167 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0804 00:26:58.196804  365167 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0804 00:26:58.196818  365167 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0804 00:26:58.196830  365167 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0804 00:26:58.196842  365167 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0804 00:26:58.196864  365167 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0804 00:26:58.196873  365167 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0804 00:26:58.196886  365167 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0804 00:26:58.196896  365167 command_runner.go:130] > # changing them here.
	I0804 00:26:58.196903  365167 command_runner.go:130] > # insecure_registries = [
	I0804 00:26:58.196919  365167 command_runner.go:130] > # ]
	I0804 00:26:58.196933  365167 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0804 00:26:58.196947  365167 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0804 00:26:58.196957  365167 command_runner.go:130] > # image_volumes = "mkdir"
	I0804 00:26:58.196967  365167 command_runner.go:130] > # Temporary directory to use for storing big files
	I0804 00:26:58.196975  365167 command_runner.go:130] > # big_files_temporary_dir = ""
	I0804 00:26:58.196985  365167 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0804 00:26:58.196994  365167 command_runner.go:130] > # CNI plugins.
	I0804 00:26:58.197000  365167 command_runner.go:130] > [crio.network]
	I0804 00:26:58.197013  365167 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0804 00:26:58.197025  365167 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0804 00:26:58.197034  365167 command_runner.go:130] > # cni_default_network = ""
	I0804 00:26:58.197043  365167 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0804 00:26:58.197052  365167 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0804 00:26:58.197064  365167 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0804 00:26:58.197073  365167 command_runner.go:130] > # plugin_dirs = [
	I0804 00:26:58.197083  365167 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0804 00:26:58.197088  365167 command_runner.go:130] > # ]
	I0804 00:26:58.197099  365167 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0804 00:26:58.197116  365167 command_runner.go:130] > [crio.metrics]
	I0804 00:26:58.197126  365167 command_runner.go:130] > # Globally enable or disable metrics support.
	I0804 00:26:58.197136  365167 command_runner.go:130] > enable_metrics = true
	I0804 00:26:58.197146  365167 command_runner.go:130] > # Specify enabled metrics collectors.
	I0804 00:26:58.197154  365167 command_runner.go:130] > # Per default all metrics are enabled.
	I0804 00:26:58.197164  365167 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0804 00:26:58.197177  365167 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0804 00:26:58.197189  365167 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0804 00:26:58.197199  365167 command_runner.go:130] > # metrics_collectors = [
	I0804 00:26:58.197208  365167 command_runner.go:130] > # 	"operations",
	I0804 00:26:58.197219  365167 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0804 00:26:58.197235  365167 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0804 00:26:58.197244  365167 command_runner.go:130] > # 	"operations_errors",
	I0804 00:26:58.197253  365167 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0804 00:26:58.197260  365167 command_runner.go:130] > # 	"image_pulls_by_name",
	I0804 00:26:58.197265  365167 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0804 00:26:58.197275  365167 command_runner.go:130] > # 	"image_pulls_failures",
	I0804 00:26:58.197297  365167 command_runner.go:130] > # 	"image_pulls_successes",
	I0804 00:26:58.197307  365167 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0804 00:26:58.197317  365167 command_runner.go:130] > # 	"image_layer_reuse",
	I0804 00:26:58.197327  365167 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0804 00:26:58.197336  365167 command_runner.go:130] > # 	"containers_oom_total",
	I0804 00:26:58.197345  365167 command_runner.go:130] > # 	"containers_oom",
	I0804 00:26:58.197353  365167 command_runner.go:130] > # 	"processes_defunct",
	I0804 00:26:58.197360  365167 command_runner.go:130] > # 	"operations_total",
	I0804 00:26:58.197365  365167 command_runner.go:130] > # 	"operations_latency_seconds",
	I0804 00:26:58.197374  365167 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0804 00:26:58.197384  365167 command_runner.go:130] > # 	"operations_errors_total",
	I0804 00:26:58.197394  365167 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0804 00:26:58.197405  365167 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0804 00:26:58.197415  365167 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0804 00:26:58.197424  365167 command_runner.go:130] > # 	"image_pulls_success_total",
	I0804 00:26:58.197433  365167 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0804 00:26:58.197443  365167 command_runner.go:130] > # 	"containers_oom_count_total",
	I0804 00:26:58.197452  365167 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0804 00:26:58.197461  365167 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0804 00:26:58.197469  365167 command_runner.go:130] > # ]
	I0804 00:26:58.197481  365167 command_runner.go:130] > # The port on which the metrics server will listen.
	I0804 00:26:58.197490  365167 command_runner.go:130] > # metrics_port = 9090
	I0804 00:26:58.197501  365167 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0804 00:26:58.197526  365167 command_runner.go:130] > # metrics_socket = ""
	I0804 00:26:58.197534  365167 command_runner.go:130] > # The certificate for the secure metrics server.
	I0804 00:26:58.197547  365167 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0804 00:26:58.197560  365167 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0804 00:26:58.197570  365167 command_runner.go:130] > # certificate on any modification event.
	I0804 00:26:58.197578  365167 command_runner.go:130] > # metrics_cert = ""
	I0804 00:26:58.197589  365167 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0804 00:26:58.197600  365167 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0804 00:26:58.197612  365167 command_runner.go:130] > # metrics_key = ""
	I0804 00:26:58.197624  365167 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0804 00:26:58.197633  365167 command_runner.go:130] > [crio.tracing]
	I0804 00:26:58.197643  365167 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0804 00:26:58.197653  365167 command_runner.go:130] > # enable_tracing = false
	I0804 00:26:58.197675  365167 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0804 00:26:58.197685  365167 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0804 00:26:58.197699  365167 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0804 00:26:58.197706  365167 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0804 00:26:58.197712  365167 command_runner.go:130] > # CRI-O NRI configuration.
	I0804 00:26:58.197720  365167 command_runner.go:130] > [crio.nri]
	I0804 00:26:58.197730  365167 command_runner.go:130] > # Globally enable or disable NRI.
	I0804 00:26:58.197737  365167 command_runner.go:130] > # enable_nri = false
	I0804 00:26:58.197747  365167 command_runner.go:130] > # NRI socket to listen on.
	I0804 00:26:58.197757  365167 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0804 00:26:58.197766  365167 command_runner.go:130] > # NRI plugin directory to use.
	I0804 00:26:58.197776  365167 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0804 00:26:58.197787  365167 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0804 00:26:58.197796  365167 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0804 00:26:58.197805  365167 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0804 00:26:58.197814  365167 command_runner.go:130] > # nri_disable_connections = false
	I0804 00:26:58.197825  365167 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0804 00:26:58.197835  365167 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0804 00:26:58.197846  365167 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0804 00:26:58.197855  365167 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0804 00:26:58.197868  365167 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0804 00:26:58.197877  365167 command_runner.go:130] > [crio.stats]
	I0804 00:26:58.197888  365167 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0804 00:26:58.197896  365167 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0804 00:26:58.197907  365167 command_runner.go:130] > # stats_collection_period = 0
	I0804 00:26:58.197955  365167 command_runner.go:130] ! time="2024-08-04 00:26:58.157519577Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0804 00:26:58.197979  365167 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
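The dump above is CRI-O's stock configuration template with every override commented out, so on this image only enable_metrics is actually set. As a minimal sketch, assuming root access on the node and CRI-O's standard /etc/crio/crio.conf.d drop-in directory (the file name 99-overrides.conf is purely illustrative), the same commented keys could be overridden without editing the main file:

	# illustrative drop-in; path assumes CRI-O's crio.conf.d support
	sudo tee /etc/crio/crio.conf.d/99-overrides.conf >/dev/null <<-'EOF'
	[crio.metrics]
	# keep metrics enabled on the default port shown above
	enable_metrics = true
	metrics_port = 9090

	[crio.image]
	# keep the pause image out of the kubelet's garbage collection
	pinned_images = [
	    "registry.k8s.io/pause:3.9",
	]
	EOF
	# some options reload live; a restart picks up everything
	sudo systemctl restart crio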
	I0804 00:26:58.198183  365167 cni.go:84] Creating CNI manager for ""
	I0804 00:26:58.198197  365167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 00:26:58.198208  365167 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:26:58.198239  365167 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.23 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-453015 NodeName:multinode-453015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:26:58.198399  365167 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-453015"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
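	Note that the generated kubelet section above deliberately disables disk-pressure housekeeping (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at "0%") so CI disk usage cannot evict test pods. As a small sketch of checking a rendered config like this one by hand, assuming it has been saved locally as kubeadm.yaml and that the kubeadm binary is recent enough to carry the validate subcommand (the v1.30.x series used here is):

	# assumes the rendered config above was saved as ./kubeadm.yaml
	# static sanity check of the kubeadm/kubelet/kube-proxy documents
	kubeadm config validate --config kubeadm.yaml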
	
	I0804 00:26:58.198476  365167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:26:58.209296  365167 command_runner.go:130] > kubeadm
	I0804 00:26:58.209322  365167 command_runner.go:130] > kubectl
	I0804 00:26:58.209326  365167 command_runner.go:130] > kubelet
	I0804 00:26:58.209401  365167 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:26:58.209474  365167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:26:58.220800  365167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0804 00:26:58.239579  365167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:26:58.258214  365167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0804 00:26:58.277337  365167 ssh_runner.go:195] Run: grep 192.168.39.23	control-plane.minikube.internal$ /etc/hosts
	I0804 00:26:58.281681  365167 command_runner.go:130] > 192.168.39.23	control-plane.minikube.internal
	I0804 00:26:58.281778  365167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:26:58.423149  365167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:26:58.438690  365167 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015 for IP: 192.168.39.23
	I0804 00:26:58.438724  365167 certs.go:194] generating shared ca certs ...
	I0804 00:26:58.438746  365167 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:26:58.438944  365167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0804 00:26:58.438986  365167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0804 00:26:58.438997  365167 certs.go:256] generating profile certs ...
	I0804 00:26:58.439074  365167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/client.key
	I0804 00:26:58.439132  365167 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.key.c0875c15
	I0804 00:26:58.439186  365167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.key
	I0804 00:26:58.439197  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 00:26:58.439212  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 00:26:58.439225  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 00:26:58.439237  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 00:26:58.439250  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 00:26:58.439262  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 00:26:58.439275  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 00:26:58.439287  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 00:26:58.439342  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0804 00:26:58.439371  365167 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0804 00:26:58.439380  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 00:26:58.439402  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0804 00:26:58.439424  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:26:58.439448  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0804 00:26:58.439483  365167 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:26:58.439509  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.439536  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem -> /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.439562  365167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.440183  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:26:58.465710  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:26:58.491165  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:26:58.515645  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 00:26:58.541229  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 00:26:58.566348  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:26:58.591492  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:26:58.618106  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/multinode-453015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 00:26:58.643577  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:26:58.668184  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0804 00:26:58.692856  365167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0804 00:26:58.716363  365167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:26:58.733961  365167 ssh_runner.go:195] Run: openssl version
	I0804 00:26:58.739951  365167 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0804 00:26:58.740202  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0804 00:26:58.752457  365167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.757728  365167 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.757785  365167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.757856  365167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0804 00:26:58.764273  365167 command_runner.go:130] > 3ec20f2e
	I0804 00:26:58.764379  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:26:58.775032  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:26:58.787381  365167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.792162  365167 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.792386  365167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.792454  365167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:26:58.798252  365167 command_runner.go:130] > b5213941
	I0804 00:26:58.798443  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:26:58.808561  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0804 00:26:58.820328  365167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.825219  365167 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.825256  365167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.825305  365167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0804 00:26:58.830984  365167 command_runner.go:130] > 51391683
	I0804 00:26:58.831057  365167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
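The test/link/hash steps above implement the standard OpenSSL CA directory layout: each trusted certificate under /etc/ssl/certs also has to be reachable as <subject-hash>.0 so that hash-based lookup during verification finds it. A condensed sketch of the same scheme, with cert.pem standing in for any of the certificates copied above:

	# cert.pem stands in for 3310972.pem / minikubeCA.pem / 331097.pem above
	# compute the subject hash OpenSSL uses for directory lookups
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
	# link the certificate into /etc/ssl/certs under both its name and its hash
	sudo ln -fs /usr/share/ca-certificates/cert.pem /etc/ssl/certs/cert.pem
	sudo ln -fs /etc/ssl/certs/cert.pem "/etc/ssl/certs/${hash}.0"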
	I0804 00:26:58.840853  365167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:26:58.845478  365167 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:26:58.845523  365167 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 00:26:58.845533  365167 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0804 00:26:58.845542  365167 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 00:26:58.845551  365167 command_runner.go:130] > Access: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845558  365167 command_runner.go:130] > Modify: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845565  365167 command_runner.go:130] > Change: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845574  365167 command_runner.go:130] >  Birth: 2024-08-04 00:20:14.585334858 +0000
	I0804 00:26:58.845648  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:26:58.851561  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.851668  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:26:58.857461  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.857589  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:26:58.863107  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.863310  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:26:58.869319  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.869530  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:26:58.875571  365167 command_runner.go:130] > Certificate will not expire
	I0804 00:26:58.875656  365167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 00:26:58.881512  365167 command_runner.go:130] > Certificate will not expire
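Each of the -checkend 86400 probes above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); minikube uses the answer to decide whether an existing certificate can be reused. A minimal sketch of the same check with the exit code made explicit (the certificate path is illustrative; any of the certs probed above works):

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "Certificate will not expire"   # the success message seen in the log above
	else
	    echo "Certificate will expire within 24h"
	fi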
	I0804 00:26:58.881678  365167 kubeadm.go:392] StartCluster: {Name:multinode-453015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-453015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:26:58.881826  365167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:26:58.881886  365167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:26:58.926720  365167 command_runner.go:130] > 8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8
	I0804 00:26:58.926753  365167 command_runner.go:130] > 51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6
	I0804 00:26:58.926785  365167 command_runner.go:130] > eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b
	I0804 00:26:58.926881  365167 command_runner.go:130] > f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647
	I0804 00:26:58.926901  365167 command_runner.go:130] > 1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f
	I0804 00:26:58.926952  365167 command_runner.go:130] > 1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924
	I0804 00:26:58.927034  365167 command_runner.go:130] > 36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316
	I0804 00:26:58.927102  365167 command_runner.go:130] > d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4
	I0804 00:26:58.928731  365167 cri.go:89] found id: "8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8"
	I0804 00:26:58.928743  365167 cri.go:89] found id: "51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6"
	I0804 00:26:58.928747  365167 cri.go:89] found id: "eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b"
	I0804 00:26:58.928750  365167 cri.go:89] found id: "f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647"
	I0804 00:26:58.928752  365167 cri.go:89] found id: "1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f"
	I0804 00:26:58.928756  365167 cri.go:89] found id: "1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924"
	I0804 00:26:58.928758  365167 cri.go:89] found id: "36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316"
	I0804 00:26:58.928761  365167 cri.go:89] found id: "d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4"
	I0804 00:26:58.928763  365167 cri.go:89] found id: ""
	I0804 00:26:58.928813  365167 ssh_runner.go:195] Run: sudo runc list -f json
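	The container IDs above come from filtering CRI containers by the io.kubernetes.pod.namespace label; the very next step asks runc for its own view of the same containers via runc list -f json. A short sketch of poking at the same data by hand on the node, assuming crictl is pointed at /var/run/crio/crio.sock as it is on this image:

	# list every kube-system container ID, running or exited
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# dump the full metadata for one of them (ID taken from the listing above)
	sudo crictl inspect 8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8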
	
	
	==> CRI-O <==
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.042384153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731463042357446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dae8d4c-e019-457f-99fe-56ef376ea6a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.043110980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4171ef1f-31cc-45ca-a23a-c29be57e1529 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.043169354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4171ef1f-31cc-45ca-a23a-c29be57e1529 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.043512891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4171ef1f-31cc-45ca-a23a-c29be57e1529 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.085542763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40b3ba70-7603-4866-a2ec-bf22604f9c7d name=/runtime.v1.RuntimeService/Version
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.085615005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40b3ba70-7603-4866-a2ec-bf22604f9c7d name=/runtime.v1.RuntimeService/Version
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.086712090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9630beba-6ad8-4eab-82e9-203331522e43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.087395913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731463087370139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9630beba-6ad8-4eab-82e9-203331522e43 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.088192452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa9671c9-d68f-44c3-b913-a6dc5f7f14e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.088250269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa9671c9-d68f-44c3-b913-a6dc5f7f14e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.088842343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa9671c9-d68f-44c3-b913-a6dc5f7f14e9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.132143660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ee9d033-520d-4432-a2d6-7d83f0fbe3e8 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.132239422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ee9d033-520d-4432-a2d6-7d83f0fbe3e8 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.133733720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb353c2a-0373-4375-826a-86263bf4e200 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.134515894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731463134487647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb353c2a-0373-4375-826a-86263bf4e200 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.135062612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16384b39-881b-4b36-b505-2c04cfcb40a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.135120893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16384b39-881b-4b36-b505-2c04cfcb40a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.135480102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16384b39-881b-4b36-b505-2c04cfcb40a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.179484322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b758bad-de64-4c29-a21d-3c608ca0c286 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.179555099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b758bad-de64-4c29-a21d-3c608ca0c286 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.180778829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8447a912-86b0-4cf9-aed2-f0ba0d2644ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.181300910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722731463181275215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8447a912-86b0-4cf9-aed2-f0ba0d2644ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.182049239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca7108d4-bec1-4f6b-908f-d7e4f2fe4d28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.182216661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca7108d4-bec1-4f6b-908f-d7e4f2fe4d28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:31:03 multinode-453015 crio[2932]: time="2024-08-04 00:31:03.182551921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ad93dd5bfe2bcd7a7f93b45756cf426ba106dc14d352ad188a3b424f423a985b,PodSandboxId:070ea9b2f8e0dbcf56c7d442d4b5af06ee9ff76ac58c37c70296b2bfd970314c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722731259091180912,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d,PodSandboxId:f4558b5067a4d3161e4c60a7efd157d8cf2ec89defca5ba5c210fec2a362b88d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722731225468826918,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652,PodSandboxId:f47797dc911e4c699c14e8bd49cc2dc16946bec965a4222aead54887e02a2b3e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722731225430682579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c,PodSandboxId:a7384d1ee21ae4f8cf09e72b3736bdd66ad8ad3d6cc2cb52e4a17d766c4c3038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722731225382201181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806d9ae2d62a8d182458ff4110689224c6d7c70478ebea837ec6ac2098be86fa,PodSandboxId:7dc8455275a3881583a1e4201c518999d667e538b50ac2a47bc573d3545482c1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722731225322524321,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kube
rnetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a,PodSandboxId:2f022f70fbb312420691f7051cdbdd2f6067ae4d02aa0340e6532318c79ca9cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722731221492413194,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c,PodSandboxId:8306bbe24a830f2db3417a92d76111e60456f80a9c2adaf30f6d17a02d629b40,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722731221489323663,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3,PodSandboxId:3630e4bedd9cd5aa858b1fd0d702455f095be55aeb8a2180145298c55d150b19,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722731221460256259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[string]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf,PodSandboxId:b96875d41283a1818ede2c935dd40781be90a4da77a3f5896d82ea83a228d5c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722731221520791002,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db5415d7d25611cd19944f1bd2d8fc3f9db0d207f7a0cc024a409a3e893582,PodSandboxId:b891fc765f03977aab5f1210841a15517d837ec1871d97001a22b24f1d763b8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722730904469676329,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qcrhw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d4f6f07-d4f1-4dd2-b3f7-557c17d54aa0,},Annotations:map[string]string{io.kubernetes.container.hash: 5fde8a76,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8,PodSandboxId:6d01779211309698c293d5149818f372f957d85f74cc4ee4adbfd5aed4cb7bf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722730851536935298,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lpfg4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89831564-c2d0-4b22-8d93-dfd59ee56c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 2e9bccc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPo
rt\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba132c389aad40e577585440ab440a572d23bda7e144c1f7653d64bd683cb6,PodSandboxId:c27297ab98f1148b9431c864c0920c099887c6d881ba2655e44fabccb5be4424,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722730851492208708,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8670b908-2c0e-4996-a2f9-32a57683749e,},Annotations:map[string]string{io.kubernetes.container.hash: c9afac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b,PodSandboxId:db0708b20133d21fa230f2771ace9de555f8070fdd23245f1b2035852dfa7e36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722730839495212089,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d625q,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b281006-ce73-4b6a-9592-1df16b7ae140,},Annotations:map[string]string{io.kubernetes.container.hash: fdf238a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647,PodSandboxId:ac74041b4c4c93b536694262cc9943d13d2b73e52d0f491248cf3cde12c50726,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722730837851720700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btrgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5a373aab-548c-491b-9ff3-7d33fc97e7e5,},Annotations:map[string]string{io.kubernetes.container.hash: 18a591b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f,PodSandboxId:256f10b9683ff80f96011cba1bc44879f926d2b6571456ee9337998e36c5ec86,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722730818309273891,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-453015,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: c105c510a7a9a8b425ca9ade0e8c30e5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316,PodSandboxId:be21249f1c8f1d42387de47959caec487e892204b275960a13e6c3b7a6407340,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722730818225093146,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 16f55b1f099a44173af53d1cb34ac46d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4,PodSandboxId:9675da07c9770a10ced9be1898c9d3d759f8652e2ee6c0997b0a0a54949891e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722730818217979600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78f7d76b76f0511ce5246e929dbbe1b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c877f80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924,PodSandboxId:8806a9012b8be0f21d882ba3c7ed6461bafbb5dfd886d21f580495a5e64ce987,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722730818264000969,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-453015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e0fab86532b68f10bada58c1ced29c0,},Annotations:map[str
ing]string{io.kubernetes.container.hash: d2e30953,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca7108d4-bec1-4f6b-908f-d7e4f2fe4d28 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ad93dd5bfe2bc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   070ea9b2f8e0d       busybox-fc5497c4f-qcrhw
	7bf76841f45db       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      3 minutes ago       Running             kindnet-cni               1                   f4558b5067a4d       kindnet-d625q
	653213f7e1cf1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   f47797dc911e4       coredns-7db6d8ff4d-lpfg4
	2a1985964e07f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      3 minutes ago       Running             kube-proxy                1                   a7384d1ee21ae       kube-proxy-btrgw
	806d9ae2d62a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   7dc8455275a38       storage-provisioner
	4d13c5d382f86       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   b96875d41283a       kube-scheduler-multinode-453015
	8190bba136cd3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   2f022f70fbb31       etcd-multinode-453015
	3e4e62d81102f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   8306bbe24a830       kube-controller-manager-multinode-453015
	a9d532c5d501a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   3630e4bedd9cd       kube-apiserver-multinode-453015
	87db5415d7d25       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   b891fc765f039       busybox-fc5497c4f-qcrhw
	8fe03d194cc67       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   6d01779211309       coredns-7db6d8ff4d-lpfg4
	51ba132c389aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   c27297ab98f11       storage-provisioner
	eda8d348bfe19       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   db0708b20133d       kindnet-d625q
	f07ab5f5f0ce9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   ac74041b4c4c9       kube-proxy-btrgw
	1a43870f80eb8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   256f10b9683ff       kube-controller-manager-multinode-453015
	1b93a7722a9db       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   8806a9012b8be       kube-apiserver-multinode-453015
	36489d3306cf4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   be21249f1c8f1       kube-scheduler-multinode-453015
	d9ce68ffecfd6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   9675da07c9770       etcd-multinode-453015
	
	
	==> coredns [653213f7e1cf11096ba726d2b4afadf89962a6395a7c280f0427695e62806652] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38783 - 59773 "HINFO IN 6596251184696092188.2619414782798662992. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011365219s
	
	
	==> coredns [8fe03d194cc67774b6ece1e77e03fac99551ef2a1ed7fe53a709c37bbca68fc8] <==
	[INFO] 10.244.0.3:39132 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001875015s
	[INFO] 10.244.0.3:35645 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171033s
	[INFO] 10.244.0.3:40695 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067736s
	[INFO] 10.244.0.3:51823 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001090872s
	[INFO] 10.244.0.3:53728 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064276s
	[INFO] 10.244.0.3:59214 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059662s
	[INFO] 10.244.0.3:53129 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057601s
	[INFO] 10.244.1.2:36858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152867s
	[INFO] 10.244.1.2:40557 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116851s
	[INFO] 10.244.1.2:33555 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109306s
	[INFO] 10.244.1.2:52895 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101585s
	[INFO] 10.244.0.3:52189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119404s
	[INFO] 10.244.0.3:58821 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072508s
	[INFO] 10.244.0.3:36432 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075175s
	[INFO] 10.244.0.3:57532 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059597s
	[INFO] 10.244.1.2:57872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145424s
	[INFO] 10.244.1.2:59714 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187549s
	[INFO] 10.244.1.2:43975 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134909s
	[INFO] 10.244.1.2:37267 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102985s
	[INFO] 10.244.0.3:35906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000071029s
	[INFO] 10.244.0.3:60307 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000044812s
	[INFO] 10.244.0.3:57522 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000036503s
	[INFO] 10.244.0.3:58416 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000028081s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-453015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-453015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=multinode-453015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_20_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:20:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-453015
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:30:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:27:04 +0000   Sun, 04 Aug 2024 00:20:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    multinode-453015
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74f961df4acb48229bf5c18464bc6732
	  System UUID:                74f961df-4acb-4822-9bf5-c18464bc6732
	  Boot ID:                    1d91e3d4-a1b5-4f22-a4a2-ffec1ee4cea0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qcrhw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 coredns-7db6d8ff4d-lpfg4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-453015                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-d625q                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-453015             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-453015    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-btrgw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-453015             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-453015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-453015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-453015 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-453015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-453015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-453015 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-453015 event: Registered Node multinode-453015 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-453015 status is now: NodeReady
	  Normal  Starting                 4m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node multinode-453015 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node multinode-453015 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node multinode-453015 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m46s                node-controller  Node multinode-453015 event: Registered Node multinode-453015 in Controller
	
	
	Name:               multinode-453015-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-453015-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=multinode-453015
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T00_27_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:27:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-453015-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:28:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:29:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:29:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:29:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 04 Aug 2024 00:28:12 +0000   Sun, 04 Aug 2024 00:29:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    multinode-453015-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6b4a354da8f4026b4c64afd1e29c6b4
	  System UUID:                c6b4a354-da8f-4026-b4c6-4afd1e29c6b4
	  Boot ID:                    88bcfe57-a24e-499a-8124-bdd0de124495
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9vxzv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kindnet-vlcff              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m41s
	  kube-system                 kube-proxy-ppqhx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  Starting                 9m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m41s (x2 over 9m41s)  kubelet          Node multinode-453015-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m41s (x2 over 9m41s)  kubelet          Node multinode-453015-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m41s (x2 over 9m41s)  kubelet          Node multinode-453015-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m23s                  kubelet          Node multinode-453015-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet          Node multinode-453015-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet          Node multinode-453015-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet          Node multinode-453015-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-453015-m02 status is now: NodeReady
	  Normal  NodeNotReady             96s                    node-controller  Node multinode-453015-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.074250] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.183240] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.151024] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.284642] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.328926] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.062936] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.563772] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.433412] kauditd_printk_skb: 52 callbacks suppressed
	[  +6.104143] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.103279] kauditd_printk_skb: 35 callbacks suppressed
	[ +13.194416] systemd-fstab-generator[1465]: Ignoring "noauto" option for root device
	[  +0.129242] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.261052] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 4 00:21] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 4 00:26] systemd-fstab-generator[2786]: Ignoring "noauto" option for root device
	[  +0.164419] systemd-fstab-generator[2798]: Ignoring "noauto" option for root device
	[  +0.185134] systemd-fstab-generator[2812]: Ignoring "noauto" option for root device
	[  +0.149312] systemd-fstab-generator[2824]: Ignoring "noauto" option for root device
	[  +0.397616] systemd-fstab-generator[2917]: Ignoring "noauto" option for root device
	[  +0.776340] systemd-fstab-generator[3030]: Ignoring "noauto" option for root device
	[  +2.223066] systemd-fstab-generator[3156]: Ignoring "noauto" option for root device
	[Aug 4 00:27] kauditd_printk_skb: 189 callbacks suppressed
	[ +11.846102] systemd-fstab-generator[3973]: Ignoring "noauto" option for root device
	[  +0.108228] kauditd_printk_skb: 32 callbacks suppressed
	[ +21.856768] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [8190bba136cd342f8c19887dfef915a2a91f12974ce7856cd46f382a371ee42a] <==
	{"level":"info","ts":"2024-08-04T00:27:01.922118Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d4cc2b8d7236707","local-member-id":"c6baa4636f442c95","added-peer-id":"c6baa4636f442c95","added-peer-peer-urls":["https://192.168.39.23:2380"]}
	{"level":"info","ts":"2024-08-04T00:27:01.922278Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d4cc2b8d7236707","local-member-id":"c6baa4636f442c95","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:27:01.922319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:27:01.923323Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:27:01.923525Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c6baa4636f442c95","initial-advertise-peer-urls":["https://192.168.39.23:2380"],"listen-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:27:01.923544Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:27:01.923607Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:27:01.923613Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:27:03.073124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T00:27:03.073227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:27:03.073295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 received MsgPreVoteResp from c6baa4636f442c95 at term 2"}
	{"level":"info","ts":"2024-08-04T00:27:03.07333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.073359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 received MsgVoteResp from c6baa4636f442c95 at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.073386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.073415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c6baa4636f442c95 elected leader c6baa4636f442c95 at term 3"}
	{"level":"info","ts":"2024-08-04T00:27:03.078841Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c6baa4636f442c95","local-member-attributes":"{Name:multinode-453015 ClientURLs:[https://192.168.39.23:2379]}","request-path":"/0/members/c6baa4636f442c95/attributes","cluster-id":"7d4cc2b8d7236707","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:27:03.079068Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:27:03.07917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:27:03.08111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:27:03.08306Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:27:03.083095Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:27:03.084612Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.23:2379"}
	{"level":"warn","ts":"2024-08-04T00:28:22.913759Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.486925ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:28:22.913972Z","caller":"traceutil/trace.go:171","msg":"trace[1032190154] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1162; }","duration":"110.779546ms","start":"2024-08-04T00:28:22.803175Z","end":"2024-08-04T00:28:22.913954Z","steps":["trace[1032190154] 'range keys from in-memory index tree'  (duration: 110.472017ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:28:22.913827Z","caller":"traceutil/trace.go:171","msg":"trace[1055889212] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"218.817263ms","start":"2024-08-04T00:28:22.694933Z","end":"2024-08-04T00:28:22.913751Z","steps":["trace[1055889212] 'process raft request'  (duration: 213.60492ms)"],"step_count":1}
	
	
	==> etcd [d9ce68ffecfd65d784af12ed4f604e9687fbd5d369d828ea8e692596dd8a06f4] <==
	{"level":"info","ts":"2024-08-04T00:20:18.999545Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:20:19.008098Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d4cc2b8d7236707","local-member-id":"c6baa4636f442c95","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:19.008199Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:19.008222Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:20:19.008275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:19.008299Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:20:19.017759Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:20:19.044397Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.23:2379"}
	{"level":"info","ts":"2024-08-04T00:21:22.804216Z","caller":"traceutil/trace.go:171","msg":"trace[324967981] transaction","detail":"{read_only:false; response_revision:489; number_of_response:1; }","duration":"116.105416ms","start":"2024-08-04T00:21:22.688098Z","end":"2024-08-04T00:21:22.804204Z","steps":["trace[324967981] 'process raft request'  (duration: 115.830866ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:21:22.80417Z","caller":"traceutil/trace.go:171","msg":"trace[782838819] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"154.937041ms","start":"2024-08-04T00:21:22.649198Z","end":"2024-08-04T00:21:22.804135Z","steps":["trace[782838819] 'process raft request'  (duration: 120.45226ms)","trace[782838819] 'compare'  (duration: 34.102652ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T00:22:13.95847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.666068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-453015-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T00:22:13.95932Z","caller":"traceutil/trace.go:171","msg":"trace[158503331] range","detail":"{range_begin:/registry/minions/multinode-453015-m03; range_end:; response_count:0; response_revision:620; }","duration":"179.548116ms","start":"2024-08-04T00:22:13.779695Z","end":"2024-08-04T00:22:13.959244Z","steps":["trace[158503331] 'range keys from in-memory index tree'  (duration: 178.603739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:22:13.958471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.636032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-453015-m03.17e85ea820fffed1\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2024-08-04T00:22:13.962835Z","caller":"traceutil/trace.go:171","msg":"trace[1635718525] range","detail":"{range_begin:/registry/events/default/multinode-453015-m03.17e85ea820fffed1; range_end:; response_count:1; response_revision:620; }","duration":"174.071127ms","start":"2024-08-04T00:22:13.788742Z","end":"2024-08-04T00:22:13.962814Z","steps":["trace[1635718525] 'range keys from in-memory index tree'  (duration: 169.530423ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:22:13.958778Z","caller":"traceutil/trace.go:171","msg":"trace[633275549] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"169.908515ms","start":"2024-08-04T00:22:13.788835Z","end":"2024-08-04T00:22:13.958743Z","steps":["trace[633275549] 'process raft request'  (duration: 169.393366ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T00:25:25.402776Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T00:25:25.402969Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-453015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"]}
	{"level":"warn","ts":"2024-08-04T00:25:25.403164Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:25:25.403327Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:25:25.500901Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:25:25.500978Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T00:25:25.501121Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c6baa4636f442c95","current-leader-member-id":"c6baa4636f442c95"}
	{"level":"info","ts":"2024-08-04T00:25:25.504132Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:25:25.504574Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-08-04T00:25:25.504666Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-453015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"]}
	
	
	==> kernel <==
	 00:31:03 up 11 min,  0 users,  load average: 0.04, 0.13, 0.09
	Linux multinode-453015 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7bf76841f45db2261dcd36e0b235b8049beabce95bb00dbd02deb093f4f54a8d] <==
	I0804 00:29:56.518553       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:30:06.516288       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:30:06.516466       1 main.go:299] handling current node
	I0804 00:30:06.516522       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:30:06.516540       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:30:16.519567       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:30:16.519738       1 main.go:299] handling current node
	I0804 00:30:16.519770       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:30:16.519789       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:30:26.519324       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:30:26.519424       1 main.go:299] handling current node
	I0804 00:30:26.519454       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:30:26.519472       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:30:36.523318       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:30:36.523490       1 main.go:299] handling current node
	I0804 00:30:36.523523       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:30:36.523557       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:30:46.515745       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:30:46.515800       1 main.go:299] handling current node
	I0804 00:30:46.515814       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:30:46.515820       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:30:56.519684       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:30:56.519791       1 main.go:299] handling current node
	I0804 00:30:56.519832       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:30:56.519853       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [eda8d348bfe19c2aa29c6496265710aa44959d0115a2162cb9c3eff5cfdd916b] <==
	I0804 00:24:40.602263       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:24:50.603811       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:24:50.603923       1 main.go:299] handling current node
	I0804 00:24:50.603954       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:24:50.603973       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:24:50.604195       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:24:50.604283       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:00.601776       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:25:00.601806       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:25:00.601952       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:25:00.601957       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:00.602077       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:25:00.602086       1 main.go:299] handling current node
	I0804 00:25:10.611142       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:25:10.611350       1 main.go:299] handling current node
	I0804 00:25:10.611392       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:25:10.611413       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	I0804 00:25:10.611563       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:25:10.611584       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:20.607148       1 main.go:295] Handling node with IPs: map[192.168.39.134:{}]
	I0804 00:25:20.607288       1 main.go:322] Node multinode-453015-m03 has CIDR [10.244.3.0/24] 
	I0804 00:25:20.607456       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0804 00:25:20.607480       1 main.go:299] handling current node
	I0804 00:25:20.607502       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0804 00:25:20.607517       1 main.go:322] Node multinode-453015-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1b93a7722a9dbbd83d21f2674549a2e84cd9cb29a27affdedc1357bae8ea3924] <==
	E0804 00:21:45.871122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51768: use of closed network connection
	E0804 00:21:46.073379       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51784: use of closed network connection
	E0804 00:21:46.250334       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51802: use of closed network connection
	E0804 00:21:46.457977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51806: use of closed network connection
	E0804 00:21:46.629004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51818: use of closed network connection
	E0804 00:21:46.796802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51842: use of closed network connection
	E0804 00:21:47.087743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51878: use of closed network connection
	E0804 00:21:47.255307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51894: use of closed network connection
	E0804 00:21:47.427272       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51914: use of closed network connection
	E0804 00:21:47.590829       1 conn.go:339] Error on socket receive: read tcp 192.168.39.23:8443->192.168.39.1:51946: use of closed network connection
	I0804 00:25:25.405097       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0804 00:25:25.424205       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424308       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424349       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424415       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.424457       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434737       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434852       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434924       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.434979       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.435118       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.435179       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.435225       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:25:25.436462       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0804 00:25:25.436933       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [a9d532c5d501ac2265badd42aa1b39d0df3b745307970890571e363c6b8d2ba3] <==
	I0804 00:27:04.603653       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:27:04.605473       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:27:04.605541       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:27:04.618127       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:27:04.619751       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:27:04.620449       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:27:04.620517       1 policy_source.go:224] refreshing policies
	I0804 00:27:04.632604       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:27:04.644705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0804 00:27:04.646411       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:27:04.651680       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:27:04.651919       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:27:04.651984       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:27:04.652061       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:27:04.652115       1 cache.go:39] Caches are synced for autoregister controller
	E0804 00:27:04.661992       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 00:27:04.709958       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:27:05.510812       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 00:27:06.772235       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:27:06.903337       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:27:06.915614       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:27:06.992534       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:27:07.005585       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 00:27:17.622314       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:27:17.822814       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1a43870f80eb82e81a96d13dff6557b3a354f50f1413b931a23cb16bbd97d58f] <==
	I0804 00:20:52.234607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="208.645µs"
	I0804 00:21:22.807068       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m02\" does not exist"
	I0804 00:21:22.859637       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m02" podCIDRs=["10.244.1.0/24"]
	I0804 00:21:26.553400       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-453015-m02"
	I0804 00:21:40.961288       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:21:43.212415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.685389ms"
	I0804 00:21:43.237304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.759597ms"
	I0804 00:21:43.239345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.971µs"
	I0804 00:21:44.739157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.29979ms"
	I0804 00:21:44.739239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.881µs"
	I0804 00:21:45.414642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.114615ms"
	I0804 00:21:45.415340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.33µs"
	I0804 00:22:13.960226       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m03\" does not exist"
	I0804 00:22:13.961844       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:22:13.972404       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m03" podCIDRs=["10.244.2.0/24"]
	I0804 00:22:16.572154       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-453015-m03"
	I0804 00:22:32.574007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:01.459774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:02.670768       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m03\" does not exist"
	I0804 00:23:02.670829       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:02.696694       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m03" podCIDRs=["10.244.3.0/24"]
	I0804 00:23:19.772486       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:23:56.628991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m03"
	I0804 00:23:56.686486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.99642ms"
	I0804 00:23:56.686584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.145µs"
	
	
	==> kube-controller-manager [3e4e62d81102f34e11fa34db9a7a41725395c55a9425db460cf3e8bb0acf887c] <==
	I0804 00:27:41.606796       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m02" podCIDRs=["10.244.1.0/24"]
	I0804 00:27:43.476596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.172µs"
	I0804 00:27:43.521449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.013µs"
	I0804 00:27:43.530776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.416µs"
	I0804 00:27:43.556960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.123µs"
	I0804 00:27:43.566118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.697µs"
	I0804 00:27:43.571422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.557µs"
	I0804 00:27:47.635301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.907µs"
	I0804 00:27:59.334216       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:27:59.356772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.334µs"
	I0804 00:27:59.374382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.248µs"
	I0804 00:28:01.111762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.929509ms"
	I0804 00:28:01.111847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.198µs"
	I0804 00:28:17.613895       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:28:18.624343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:28:18.625007       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-453015-m03\" does not exist"
	I0804 00:28:18.644761       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-453015-m03" podCIDRs=["10.244.2.0/24"]
	I0804 00:28:36.376717       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m03"
	I0804 00:28:41.982479       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-453015-m02"
	I0804 00:29:27.643890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.420027ms"
	I0804 00:29:27.643982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.297µs"
	I0804 00:29:57.586892       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-sg5st"
	I0804 00:29:57.610581       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-sg5st"
	I0804 00:29:57.610672       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-j96j8"
	I0804 00:29:57.634891       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-j96j8"
	
	
	==> kube-proxy [2a1985964e07f1b0ec8a16d463cdbd0656895e2b7cec6cfe063d7bc763be9d1c] <==
	I0804 00:27:05.686052       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:27:05.698687       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.23"]
	I0804 00:27:05.790978       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:27:05.791076       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:27:05.791095       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:27:05.795823       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:27:05.796121       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:27:05.796153       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:27:05.797761       1 config.go:192] "Starting service config controller"
	I0804 00:27:05.797834       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:27:05.797881       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:27:05.797885       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:27:05.798528       1 config.go:319] "Starting node config controller"
	I0804 00:27:05.798561       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:27:05.898507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:27:05.898590       1 shared_informer.go:320] Caches are synced for node config
	I0804 00:27:05.898599       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f07ab5f5f0ce98c507ad4d4ab9f217a9618d6d6aad54dc5b898c31138096c647] <==
	I0804 00:20:38.357556       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:20:38.372719       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.23"]
	I0804 00:20:38.429531       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:20:38.429631       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:20:38.429650       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:20:38.433363       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:20:38.433927       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:20:38.433946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:20:38.435792       1 config.go:192] "Starting service config controller"
	I0804 00:20:38.436167       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:20:38.436237       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:20:38.436257       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:20:38.438489       1 config.go:319] "Starting node config controller"
	I0804 00:20:38.439219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:20:38.536756       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:20:38.536834       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:20:38.539306       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [36489d3306cf422666cee90420944efbc38e508a4ac7333f638bb884e7a39316] <==
	E0804 00:20:22.033738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 00:20:22.053531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 00:20:22.053586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0804 00:20:22.118499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 00:20:22.118641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0804 00:20:22.125971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 00:20:22.126000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 00:20:22.185310       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:22.185338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 00:20:22.198474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 00:20:22.198564       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 00:20:22.211709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 00:20:22.211830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0804 00:20:22.239749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 00:20:22.239890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0804 00:20:22.312496       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 00:20:22.312608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0804 00:20:22.318881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 00:20:22.319088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0804 00:20:22.332339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 00:20:22.332479       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 00:20:22.495197       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 00:20:22.495290       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0804 00:20:25.119585       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 00:25:25.408253       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4d13c5d382f86ef49e9b874d50348d7b5904cee4e2280a8369ee799c0cfcf6bf] <==
	I0804 00:27:03.066333       1 serving.go:380] Generated self-signed cert in-memory
	I0804 00:27:04.655527       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:27:04.655608       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:27:04.670069       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0804 00:27:04.672133       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0804 00:27:04.672276       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:27:04.672356       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:27:04.672390       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0804 00:27:04.672450       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0804 00:27:04.673599       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:27:04.673536       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:27:04.773913       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0804 00:27:04.775359       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:27:04.775970       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810343    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b281006-ce73-4b6a-9592-1df16b7ae140-xtables-lock\") pod \"kindnet-d625q\" (UID: \"6b281006-ce73-4b6a-9592-1df16b7ae140\") " pod="kube-system/kindnet-d625q"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810512    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6b281006-ce73-4b6a-9592-1df16b7ae140-cni-cfg\") pod \"kindnet-d625q\" (UID: \"6b281006-ce73-4b6a-9592-1df16b7ae140\") " pod="kube-system/kindnet-d625q"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810599    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b281006-ce73-4b6a-9592-1df16b7ae140-lib-modules\") pod \"kindnet-d625q\" (UID: \"6b281006-ce73-4b6a-9592-1df16b7ae140\") " pod="kube-system/kindnet-d625q"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810699    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8670b908-2c0e-4996-a2f9-32a57683749e-tmp\") pod \"storage-provisioner\" (UID: \"8670b908-2c0e-4996-a2f9-32a57683749e\") " pod="kube-system/storage-provisioner"
	Aug 04 00:27:04 multinode-453015 kubelet[3163]: I0804 00:27:04.810916    3163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a373aab-548c-491b-9ff3-7d33fc97e7e5-xtables-lock\") pod \"kube-proxy-btrgw\" (UID: \"5a373aab-548c-491b-9ff3-7d33fc97e7e5\") " pod="kube-system/kube-proxy-btrgw"
	Aug 04 00:28:00 multinode-453015 kubelet[3163]: E0804 00:28:00.849334    3163 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:28:00 multinode-453015 kubelet[3163]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:29:00 multinode-453015 kubelet[3163]: E0804 00:29:00.843717    3163 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:29:00 multinode-453015 kubelet[3163]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:29:00 multinode-453015 kubelet[3163]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:29:00 multinode-453015 kubelet[3163]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:29:00 multinode-453015 kubelet[3163]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:30:00 multinode-453015 kubelet[3163]: E0804 00:30:00.846343    3163 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:30:00 multinode-453015 kubelet[3163]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:30:00 multinode-453015 kubelet[3163]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:30:00 multinode-453015 kubelet[3163]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:30:00 multinode-453015 kubelet[3163]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 00:31:00 multinode-453015 kubelet[3163]: E0804 00:31:00.844961    3163 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 00:31:00 multinode-453015 kubelet[3163]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 00:31:00 multinode-453015 kubelet[3163]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 00:31:00 multinode-453015 kubelet[3163]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 00:31:00 multinode-453015 kubelet[3163]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:31:02.763455  367016 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19370-323890/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-453015 -n multinode-453015
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-453015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.46s)

                                                
                                    
x
+
TestPreload (271.29s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-859690 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-859690 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m10.124377701s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-859690 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-859690 image pull gcr.io/k8s-minikube/busybox: (1.060076841s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-859690
E0804 00:37:24.418096  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-859690: exit status 82 (2m0.477178136s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-859690"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-859690 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-04 00:39:13.108887728 +0000 UTC m=+5780.976045996
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-859690 -n test-preload-859690
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-859690 -n test-preload-859690: exit status 3 (18.492972295s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 00:39:31.597912  369920 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host
	E0804 00:39:31.597934  369920 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.45:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-859690" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-859690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-859690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-859690: (1.12967991s)
--- FAIL: TestPreload (271.29s)

                                                
                                    
x
+
TestKubernetesUpgrade (466.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0804 00:42:07.466363  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0804 00:42:24.416942  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m34.471966648s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-055939] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-055939" primary control-plane node in "kubernetes-upgrade-055939" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:41:37.942418  373387 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:41:37.942615  373387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:41:37.942622  373387 out.go:304] Setting ErrFile to fd 2...
	I0804 00:41:37.942627  373387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:41:37.942806  373387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:41:37.943399  373387 out.go:298] Setting JSON to false
	I0804 00:41:37.944336  373387 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":33846,"bootTime":1722698252,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:41:37.944398  373387 start.go:139] virtualization: kvm guest
	I0804 00:41:37.946597  373387 out.go:177] * [kubernetes-upgrade-055939] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:41:37.947892  373387 out.go:177]   - MINIKUBE_LOCATION=19370
	I0804 00:41:37.947886  373387 notify.go:220] Checking for updates...
	I0804 00:41:37.949524  373387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:41:37.950958  373387 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:41:37.952423  373387 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:41:37.953767  373387 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:41:37.954866  373387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:41:37.956430  373387 config.go:182] Loaded profile config "NoKubernetes-419151": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:41:37.956519  373387 config.go:182] Loaded profile config "force-systemd-env-439963": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:41:37.956597  373387 config.go:182] Loaded profile config "offline-crio-404249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:41:37.956688  373387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:41:37.990270  373387 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:41:37.991625  373387 start.go:297] selected driver: kvm2
	I0804 00:41:37.991637  373387 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:41:37.991658  373387 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:41:37.992359  373387 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:41:37.992435  373387 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:41:38.008698  373387 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:41:38.008746  373387 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:41:38.008967  373387 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:41:38.009030  373387 cni.go:84] Creating CNI manager for ""
	I0804 00:41:38.009043  373387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:41:38.009057  373387 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:41:38.009110  373387 start.go:340] cluster config:
	{Name:kubernetes-upgrade-055939 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:41:38.009217  373387 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:41:38.011537  373387 out.go:177] * Starting "kubernetes-upgrade-055939" primary control-plane node in "kubernetes-upgrade-055939" cluster
	I0804 00:41:38.013246  373387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:41:38.013298  373387 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0804 00:41:38.013313  373387 cache.go:56] Caching tarball of preloaded images
	I0804 00:41:38.013428  373387 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:41:38.013444  373387 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0804 00:41:38.013578  373387 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/config.json ...
	I0804 00:41:38.013605  373387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/config.json: {Name:mkef953abc64464c2913998873802afd9ac28bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:41:38.013793  373387 start.go:360] acquireMachinesLock for kubernetes-upgrade-055939: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:42:41.358845  373387 start.go:364] duration metric: took 1m3.345019907s to acquireMachinesLock for "kubernetes-upgrade-055939"
	I0804 00:42:41.358947  373387 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-055939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:42:41.359076  373387 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:42:41.361076  373387 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 00:42:41.361291  373387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:42:41.361339  373387 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:42:41.381097  373387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0804 00:42:41.381591  373387 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:42:41.382408  373387 main.go:141] libmachine: Using API Version  1
	I0804 00:42:41.382436  373387 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:42:41.382917  373387 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:42:41.383165  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetMachineName
	I0804 00:42:41.383353  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:42:41.383522  373387 start.go:159] libmachine.API.Create for "kubernetes-upgrade-055939" (driver="kvm2")
	I0804 00:42:41.383557  373387 client.go:168] LocalClient.Create starting
	I0804 00:42:41.383609  373387 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0804 00:42:41.383650  373387 main.go:141] libmachine: Decoding PEM data...
	I0804 00:42:41.383669  373387 main.go:141] libmachine: Parsing certificate...
	I0804 00:42:41.383739  373387 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0804 00:42:41.383768  373387 main.go:141] libmachine: Decoding PEM data...
	I0804 00:42:41.383791  373387 main.go:141] libmachine: Parsing certificate...
	I0804 00:42:41.383821  373387 main.go:141] libmachine: Running pre-create checks...
	I0804 00:42:41.383835  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .PreCreateCheck
	I0804 00:42:41.384488  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetConfigRaw
	I0804 00:42:41.384950  373387 main.go:141] libmachine: Creating machine...
	I0804 00:42:41.384968  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .Create
	I0804 00:42:41.385120  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Creating KVM machine...
	I0804 00:42:41.386757  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found existing default KVM network
	I0804 00:42:41.388532  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:41.388347  374179 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:aa:7d} reservation:<nil>}
	I0804 00:42:41.391107  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:41.390968  374179 network.go:209] skipping subnet 192.168.50.0/24 that is reserved: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0804 00:42:41.392103  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:41.392008  374179 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:c8:e8} reservation:<nil>}
	I0804 00:42:41.393046  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:41.392958  374179 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000385220}
	I0804 00:42:41.393071  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | created network xml: 
	I0804 00:42:41.393083  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | <network>
	I0804 00:42:41.393093  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |   <name>mk-kubernetes-upgrade-055939</name>
	I0804 00:42:41.393106  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |   <dns enable='no'/>
	I0804 00:42:41.393120  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |   
	I0804 00:42:41.393132  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0804 00:42:41.393152  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |     <dhcp>
	I0804 00:42:41.393167  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0804 00:42:41.393181  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |     </dhcp>
	I0804 00:42:41.393193  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |   </ip>
	I0804 00:42:41.393206  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG |   
	I0804 00:42:41.393218  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | </network>
	I0804 00:42:41.393236  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | 
	I0804 00:42:41.399364  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | trying to create private KVM network mk-kubernetes-upgrade-055939 192.168.72.0/24...
	I0804 00:42:41.484896  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | private KVM network mk-kubernetes-upgrade-055939 192.168.72.0/24 created
	I0804 00:42:41.484943  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939 ...
	I0804 00:42:41.484962  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:42:41.493706  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:41.484862  374179 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:42:41.493748  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:42:41.779787  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:41.779602  374179 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa...
	I0804 00:42:42.155591  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:42.155452  374179 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/kubernetes-upgrade-055939.rawdisk...
	I0804 00:42:42.155619  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Writing magic tar header
	I0804 00:42:42.155637  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Writing SSH key tar header
	I0804 00:42:42.155655  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:42.155629  374179 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939 ...
	I0804 00:42:42.155799  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939
	I0804 00:42:42.155852  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939 (perms=drwx------)
	I0804 00:42:42.155867  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0804 00:42:42.155888  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:42:42.155898  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0804 00:42:42.155910  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:42:42.155923  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:42:42.155943  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0804 00:42:42.155959  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0804 00:42:42.155972  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:42:42.155988  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:42:42.156000  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Creating domain...
	I0804 00:42:42.156013  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:42:42.156030  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Checking permissions on dir: /home
	I0804 00:42:42.156041  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Skipping /home - not owner
	I0804 00:42:42.157323  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) define libvirt domain using xml: 
	I0804 00:42:42.157352  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) <domain type='kvm'>
	I0804 00:42:42.157364  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   <name>kubernetes-upgrade-055939</name>
	I0804 00:42:42.157376  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   <memory unit='MiB'>2200</memory>
	I0804 00:42:42.157387  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   <vcpu>2</vcpu>
	I0804 00:42:42.157394  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   <features>
	I0804 00:42:42.157406  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <acpi/>
	I0804 00:42:42.157411  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <apic/>
	I0804 00:42:42.157417  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <pae/>
	I0804 00:42:42.157421  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     
	I0804 00:42:42.157427  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   </features>
	I0804 00:42:42.157434  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   <cpu mode='host-passthrough'>
	I0804 00:42:42.157442  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   
	I0804 00:42:42.157450  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   </cpu>
	I0804 00:42:42.157460  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   <os>
	I0804 00:42:42.157467  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <type>hvm</type>
	I0804 00:42:42.157476  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <boot dev='cdrom'/>
	I0804 00:42:42.157482  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <boot dev='hd'/>
	I0804 00:42:42.157488  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <bootmenu enable='no'/>
	I0804 00:42:42.157492  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   </os>
	I0804 00:42:42.157497  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   <devices>
	I0804 00:42:42.157543  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <disk type='file' device='cdrom'>
	I0804 00:42:42.157566  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/boot2docker.iso'/>
	I0804 00:42:42.157579  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <target dev='hdc' bus='scsi'/>
	I0804 00:42:42.157586  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <readonly/>
	I0804 00:42:42.157594  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     </disk>
	I0804 00:42:42.157601  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <disk type='file' device='disk'>
	I0804 00:42:42.157612  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:42:42.157623  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/kubernetes-upgrade-055939.rawdisk'/>
	I0804 00:42:42.157633  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <target dev='hda' bus='virtio'/>
	I0804 00:42:42.157662  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     </disk>
	I0804 00:42:42.157673  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <interface type='network'>
	I0804 00:42:42.157682  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <source network='mk-kubernetes-upgrade-055939'/>
	I0804 00:42:42.157691  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <model type='virtio'/>
	I0804 00:42:42.157697  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     </interface>
	I0804 00:42:42.157716  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <interface type='network'>
	I0804 00:42:42.157724  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <source network='default'/>
	I0804 00:42:42.157741  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <model type='virtio'/>
	I0804 00:42:42.157748  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     </interface>
	I0804 00:42:42.157758  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <serial type='pty'>
	I0804 00:42:42.157765  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <target port='0'/>
	I0804 00:42:42.157773  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     </serial>
	I0804 00:42:42.157831  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <console type='pty'>
	I0804 00:42:42.157841  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <target type='serial' port='0'/>
	I0804 00:42:42.157856  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     </console>
	I0804 00:42:42.157875  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     <rng model='virtio'>
	I0804 00:42:42.157903  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)       <backend model='random'>/dev/random</backend>
	I0804 00:42:42.157917  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     </rng>
	I0804 00:42:42.157926  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     
	I0804 00:42:42.157933  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)     
	I0804 00:42:42.157944  373387 main.go:141] libmachine: (kubernetes-upgrade-055939)   </devices>
	I0804 00:42:42.157970  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) </domain>
	I0804 00:42:42.157993  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) 
	I0804 00:42:42.162783  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:d1:af:f4 in network default
	I0804 00:42:42.163754  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:42.163791  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Ensuring networks are active...
	I0804 00:42:42.164654  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Ensuring network default is active
	I0804 00:42:42.165126  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Ensuring network mk-kubernetes-upgrade-055939 is active
	I0804 00:42:42.165761  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Getting domain xml...
	I0804 00:42:42.166750  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Creating domain...
	I0804 00:42:43.572436  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Waiting to get IP...
	I0804 00:42:43.573475  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:43.573971  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:43.574018  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:43.573968  374179 retry.go:31] will retry after 309.288657ms: waiting for machine to come up
	I0804 00:42:43.884599  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:43.885318  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:43.885343  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:43.885222  374179 retry.go:31] will retry after 295.112736ms: waiting for machine to come up
	I0804 00:42:44.181794  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:44.182282  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:44.182302  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:44.182230  374179 retry.go:31] will retry after 399.908607ms: waiting for machine to come up
	I0804 00:42:44.584036  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:44.584313  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:44.584345  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:44.584284  374179 retry.go:31] will retry after 408.165038ms: waiting for machine to come up
	I0804 00:42:44.993835  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:44.994347  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:44.994372  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:44.994310  374179 retry.go:31] will retry after 590.672786ms: waiting for machine to come up
	I0804 00:42:45.587106  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:45.587631  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:45.587672  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:45.587570  374179 retry.go:31] will retry after 890.891317ms: waiting for machine to come up
	I0804 00:42:46.480208  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:46.480755  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:46.480792  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:46.480696  374179 retry.go:31] will retry after 889.869593ms: waiting for machine to come up
	I0804 00:42:47.372140  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:47.372691  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:47.372727  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:47.372631  374179 retry.go:31] will retry after 1.028567255s: waiting for machine to come up
	I0804 00:42:48.402533  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:48.403087  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:48.403120  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:48.403011  374179 retry.go:31] will retry after 1.275345299s: waiting for machine to come up
	I0804 00:42:49.680713  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:49.681268  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:49.681298  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:49.681207  374179 retry.go:31] will retry after 2.262979933s: waiting for machine to come up
	I0804 00:42:51.945811  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:51.946327  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:51.946365  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:51.946272  374179 retry.go:31] will retry after 1.817413813s: waiting for machine to come up
	I0804 00:42:53.766255  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:53.766702  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:53.766722  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:53.766661  374179 retry.go:31] will retry after 2.958912009s: waiting for machine to come up
	I0804 00:42:56.726901  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:42:56.727420  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:42:56.727448  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:42:56.727371  374179 retry.go:31] will retry after 3.899631424s: waiting for machine to come up
	I0804 00:43:00.630701  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:00.631185  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find current IP address of domain kubernetes-upgrade-055939 in network mk-kubernetes-upgrade-055939
	I0804 00:43:00.631212  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | I0804 00:43:00.631148  374179 retry.go:31] will retry after 4.701196238s: waiting for machine to come up
	I0804 00:43:05.336200  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.336725  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Found IP for machine: 192.168.72.118
	I0804 00:43:05.336771  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has current primary IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
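The retry.go lines above show the usual poll-and-back-off pattern while the new VM waits for its DHCP lease, with waits growing from roughly 300ms up to a few seconds. A rough stdlib-only sketch of that pattern (lookupIP is a hypothetical stand-in, not a minikube function):

// waitip.go - poll a condition with a growing, jittered backoff and a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for "ask libvirt for the domain's current IP".
func lookupIP(domain string) (string, error) {
	return "", errors.New("no lease yet") // pretend the lease has not appeared
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Jitter and grow the wait, roughly like the 309ms -> 4.7s steps above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff = backoff * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if ip, err := waitForIP("kubernetes-upgrade-055939", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}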
	I0804 00:43:05.336787  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Reserving static IP address...
	I0804 00:43:05.337258  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-055939", mac: "52:54:00:b8:66:f0", ip: "192.168.72.118"} in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.416447  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Getting to WaitForSSH function...
	I0804 00:43:05.416481  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Reserved static IP address: 192.168.72.118
	I0804 00:43:05.416496  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Waiting for SSH to be available...
	I0804 00:43:05.419592  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.420032  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:05.420067  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.420180  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Using SSH client type: external
	I0804 00:43:05.420202  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa (-rw-------)
	I0804 00:43:05.420222  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:43:05.420255  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | About to run SSH command:
	I0804 00:43:05.420289  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | exit 0
	I0804 00:43:05.545808  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | SSH cmd err, output: <nil>: 
	I0804 00:43:05.546052  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) KVM machine creation complete!
	I0804 00:43:05.546411  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetConfigRaw
	I0804 00:43:05.547018  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:43:05.547234  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:43:05.547406  373387 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:43:05.547425  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetState
	I0804 00:43:05.548755  373387 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:43:05.548771  373387 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:43:05.548778  373387 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:43:05.548787  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:05.551040  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.551460  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:05.551494  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.551639  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:05.551837  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.551997  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.552137  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:05.552318  373387 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:05.552596  373387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0804 00:43:05.552615  373387 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:43:05.661177  373387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
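The WaitForSSH step above boils down to retrying until sshd answers an `exit 0`. A minimal sketch, assuming plain TCP reachability of port 22 is an acceptable stand-in for the full SSH probe the log runs:

// waitssh.go - keep dialing <ip>:22 until a connection succeeds or we time out.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // sshd is listening; a real check would also run `exit 0`
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.72.118:22", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}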
	I0804 00:43:05.661208  373387 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:43:05.661217  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:05.664090  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.664509  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:05.664551  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.664697  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:05.664929  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.665067  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.665247  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:05.665462  373387 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:05.665721  373387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0804 00:43:05.665745  373387 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:43:05.774357  373387 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:43:05.774462  373387 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:43:05.774475  373387 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:43:05.774486  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetMachineName
	I0804 00:43:05.774783  373387 buildroot.go:166] provisioning hostname "kubernetes-upgrade-055939"
	I0804 00:43:05.774809  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetMachineName
	I0804 00:43:05.775012  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:05.777854  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.778206  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:05.778240  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.778387  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:05.778586  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.778724  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.778835  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:05.779021  373387 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:05.779199  373387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0804 00:43:05.779211  373387 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-055939 && echo "kubernetes-upgrade-055939" | sudo tee /etc/hostname
	I0804 00:43:05.905599  373387 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-055939
	
	I0804 00:43:05.905629  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:05.908498  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.908939  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:05.908995  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:05.909180  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:05.909382  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.909591  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:05.909780  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:05.910034  373387 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:05.910276  373387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0804 00:43:05.910296  373387 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-055939' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-055939/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-055939' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:43:06.028205  373387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:43:06.028240  373387 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0804 00:43:06.028269  373387 buildroot.go:174] setting up certificates
	I0804 00:43:06.028282  373387 provision.go:84] configureAuth start
	I0804 00:43:06.028309  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetMachineName
	I0804 00:43:06.028633  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetIP
	I0804 00:43:06.031772  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.032165  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.032223  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.032458  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:06.034800  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.035172  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.035200  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.035330  373387 provision.go:143] copyHostCerts
	I0804 00:43:06.035396  373387 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0804 00:43:06.035407  373387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0804 00:43:06.035459  373387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0804 00:43:06.035537  373387 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0804 00:43:06.035545  373387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0804 00:43:06.035564  373387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0804 00:43:06.035618  373387 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0804 00:43:06.035625  373387 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0804 00:43:06.035643  373387 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0804 00:43:06.035684  373387 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-055939 san=[127.0.0.1 192.168.72.118 kubernetes-upgrade-055939 localhost minikube]
	I0804 00:43:06.169925  373387 provision.go:177] copyRemoteCerts
	I0804 00:43:06.169987  373387 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:43:06.170014  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:06.173150  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.173491  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.173563  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.173754  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:06.173971  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.174143  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:06.174304  373387 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa Username:docker}
	I0804 00:43:06.260223  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0804 00:43:06.289150  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0804 00:43:06.314666  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 00:43:06.339939  373387 provision.go:87] duration metric: took 311.640029ms to configureAuth
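The configureAuth step above generates a server certificate whose SANs cover the VM IP, 127.0.0.1 and a few hostnames. A self-contained crypto/x509 sketch of the same idea (self-signed rather than CA-signed, purely illustrative, not the provisioner's code):

// servercert.go - issue a cert whose SANs match the san=[...] list logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-055939"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // long-lived, as in the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-055939", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.118")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}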
	I0804 00:43:06.339977  373387 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:43:06.340168  373387 config.go:182] Loaded profile config "kubernetes-upgrade-055939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:43:06.340247  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:06.342798  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.343049  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.343091  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.343228  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:06.343443  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.343612  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.343749  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:06.343913  373387 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:06.344080  373387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0804 00:43:06.344093  373387 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:43:06.624718  373387 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:43:06.624748  373387 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:43:06.624760  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetURL
	I0804 00:43:06.626357  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Using libvirt version 6000000
	I0804 00:43:06.628686  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.629084  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.629116  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.629256  373387 main.go:141] libmachine: Docker is up and running!
	I0804 00:43:06.629272  373387 main.go:141] libmachine: Reticulating splines...
	I0804 00:43:06.629281  373387 client.go:171] duration metric: took 25.245714924s to LocalClient.Create
	I0804 00:43:06.629311  373387 start.go:167] duration metric: took 25.245790662s to libmachine.API.Create "kubernetes-upgrade-055939"
	I0804 00:43:06.629324  373387 start.go:293] postStartSetup for "kubernetes-upgrade-055939" (driver="kvm2")
	I0804 00:43:06.629337  373387 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:43:06.629360  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:43:06.629712  373387 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:43:06.629749  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:06.632062  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.632393  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.632430  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.632634  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:06.632870  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.633064  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:06.633240  373387 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa Username:docker}
	I0804 00:43:06.719676  373387 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:43:06.724531  373387 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:43:06.724561  373387 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0804 00:43:06.724638  373387 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0804 00:43:06.724757  373387 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0804 00:43:06.724873  373387 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:43:06.735410  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:43:06.762542  373387 start.go:296] duration metric: took 133.202355ms for postStartSetup
	I0804 00:43:06.762592  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetConfigRaw
	I0804 00:43:06.763203  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetIP
	I0804 00:43:06.765931  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.766292  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.766315  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.766609  373387 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/config.json ...
	I0804 00:43:06.766815  373387 start.go:128] duration metric: took 25.407725265s to createHost
	I0804 00:43:06.766840  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:06.769184  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.769574  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.769611  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.769734  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:06.769922  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.770119  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.770278  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:06.770512  373387 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:06.770736  373387 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.118 22 <nil> <nil>}
	I0804 00:43:06.770750  373387 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:43:06.878576  373387 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722732186.843507396
	
	I0804 00:43:06.878605  373387 fix.go:216] guest clock: 1722732186.843507396
	I0804 00:43:06.878615  373387 fix.go:229] Guest: 2024-08-04 00:43:06.843507396 +0000 UTC Remote: 2024-08-04 00:43:06.766827254 +0000 UTC m=+88.860594454 (delta=76.680142ms)
	I0804 00:43:06.878638  373387 fix.go:200] guest clock delta is within tolerance: 76.680142ms
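The fix.go lines above compare the guest's `date +%s.%N` output with the host clock at the time of the probe and accept the machine when the delta stays within a tolerance. A small sketch of that arithmetic, using the values from the log (the 2s tolerance is an assumption, not minikube's actual threshold):

// clockdelta.go - parse a `date +%s.%N` string and compare it with a host timestamp.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722732186.843507396\n") // guest output from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 8, 4, 0, 43, 6, 766827254, time.UTC) // host time at the probe
	delta := guest.Sub(remote)
	tolerance := 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta is %v\n", delta)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Println("within tolerance")
	}
}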
	I0804 00:43:06.878643  373387 start.go:83] releasing machines lock for "kubernetes-upgrade-055939", held for 25.519736217s
	I0804 00:43:06.878667  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:43:06.879007  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetIP
	I0804 00:43:06.882122  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.882504  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.882534  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.882671  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:43:06.883216  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:43:06.883427  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:43:06.883573  373387 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:43:06.883617  373387 ssh_runner.go:195] Run: cat /version.json
	I0804 00:43:06.883643  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:06.883619  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:43:06.886476  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.886619  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.886991  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.887037  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:06.887076  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.887105  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:06.887218  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:06.887295  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:43:06.887451  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.887462  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:43:06.887604  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:06.887625  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:43:06.887728  373387 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa Username:docker}
	I0804 00:43:06.887835  373387 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa Username:docker}
	I0804 00:43:06.994366  373387 ssh_runner.go:195] Run: systemctl --version
	I0804 00:43:07.003506  373387 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:43:07.171152  373387 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:43:07.177856  373387 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:43:07.177940  373387 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:43:07.203036  373387 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:43:07.203063  373387 start.go:495] detecting cgroup driver to use...
	I0804 00:43:07.203161  373387 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:43:07.228619  373387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:43:07.244885  373387 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:43:07.244953  373387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:43:07.260139  373387 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:43:07.274987  373387 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:43:07.396539  373387 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:43:07.533051  373387 docker.go:233] disabling docker service ...
	I0804 00:43:07.533163  373387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:43:07.548899  373387 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:43:07.562355  373387 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:43:07.702769  373387 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:43:07.843535  373387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:43:07.859499  373387 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:43:07.881212  373387 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 00:43:07.881305  373387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:07.893529  373387 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:43:07.893609  373387 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:07.907066  373387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:07.921644  373387 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:07.935244  373387 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:43:07.949124  373387 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:43:07.963272  373387 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:43:07.963352  373387 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:43:07.983014  373387 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:43:07.994588  373387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:43:08.128835  373387 ssh_runner.go:195] Run: sudo systemctl restart crio
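The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs manager with conmon in the "pod" cgroup. The same edits, sketched in-process with Go's regexp package against a local copy of the file (collapsing the three cgroup-related sed steps into a delete plus one substitution):

// criocfg.go - apply the pause_image / cgroup_manager / conmon_cgroup edits locally.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // a local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	cfg := string(data)
	// Pin the pause image, as the first sed above does.
	cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Drop any existing conmon_cgroup line before re-adding it below.
	cfg = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(cfg, "")
	// Force cgroupfs and put conmon in the "pod" cgroup right after it.
	cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(cfg, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("updated pause_image, cgroup_manager and conmon_cgroup")
}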
	I0804 00:43:08.300069  373387 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:43:08.300145  373387 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:43:08.305384  373387 start.go:563] Will wait 60s for crictl version
	I0804 00:43:08.305480  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:08.309530  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:43:08.353307  373387 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:43:08.353420  373387 ssh_runner.go:195] Run: crio --version
	I0804 00:43:08.382953  373387 ssh_runner.go:195] Run: crio --version
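The readiness checks above wait up to 60s for the CRI socket to appear and then query the runtime. A small sketch of the same two checks using the standard library and the real `crictl version` subcommand (needs CRI-O installed and root):

// criready.go - wait for /var/run/crio/crio.sock, then ask crictl for the runtime version.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx, "crictl", "version").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("crictl version failed:", err)
	}
}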
	I0804 00:43:08.414472  373387 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 00:43:08.415684  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetIP
	I0804 00:43:08.419262  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:08.419712  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:43:08.419746  373387 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:43:08.420045  373387 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0804 00:43:08.424907  373387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:43:08.441264  373387 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-055939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.118 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0804 00:43:08.441423  373387 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:43:08.441499  373387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:43:08.482428  373387 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:43:08.482518  373387 ssh_runner.go:195] Run: which lz4
	I0804 00:43:08.486765  373387 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:43:08.491321  373387 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:43:08.491356  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 00:43:10.257263  373387 crio.go:462] duration metric: took 1.770530027s to copy over tarball
	I0804 00:43:10.257357  373387 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:43:13.036544  373387 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.779149042s)
	I0804 00:43:13.036583  373387 crio.go:469] duration metric: took 2.779280146s to extract the tarball
	I0804 00:43:13.036594  373387 ssh_runner.go:146] rm: /preloaded.tar.lz4
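The preload step above copies an lz4-compressed image tarball into the VM, extracts it under /var and logs a duration metric before deleting the tarball. A sketch that runs the same tar invocation locally and times it (requires tar and lz4, plus root for the paths shown; the path is the one from the log):

// preload.go - extract the preload tarball and report how long it took.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}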
	I0804 00:43:13.079813  373387 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:43:13.173069  373387 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 00:43:13.173110  373387 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 00:43:13.173189  373387 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:43:13.173202  373387 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:43:13.173215  373387 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 00:43:13.173228  373387 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:43:13.173230  373387 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:43:13.173239  373387 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:43:13.173242  373387 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 00:43:13.173272  373387 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:43:13.174911  373387 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:43:13.174957  373387 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:43:13.175064  373387 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 00:43:13.175266  373387 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:43:13.175788  373387 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:43:13.175808  373387 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:43:13.175891  373387 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 00:43:13.175953  373387 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:43:13.349675  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:43:13.355442  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 00:43:13.355982  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:43:13.362008  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:43:13.387228  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 00:43:13.389341  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 00:43:13.487179  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:43:13.487647  373387 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:43:13.499105  373387 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 00:43:13.499158  373387 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:43:13.499214  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:13.511369  373387 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 00:43:13.511419  373387 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 00:43:13.511468  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:13.526577  373387 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 00:43:13.526631  373387 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:43:13.526687  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:13.550931  373387 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 00:43:13.550980  373387 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 00:43:13.551029  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:13.551265  373387 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 00:43:13.551347  373387 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:43:13.551414  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:13.569977  373387 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 00:43:13.570027  373387 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 00:43:13.570089  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:13.618744  373387 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 00:43:13.618791  373387 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:43:13.618837  373387 ssh_runner.go:195] Run: which crictl
	I0804 00:43:13.694262  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 00:43:13.694288  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 00:43:13.694404  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 00:43:13.694415  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 00:43:13.694459  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 00:43:13.694540  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 00:43:13.694465  373387 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 00:43:13.894373  373387 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 00:43:13.894420  373387 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 00:43:13.896910  373387 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 00:43:13.896910  373387 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 00:43:13.896958  373387 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 00:43:13.897035  373387 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 00:43:13.897116  373387 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 00:43:13.897150  373387 cache_images.go:92] duration metric: took 724.02384ms to LoadCachedImages
	W0804 00:43:13.897242  373387 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
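	The stat error above means none of the v1.20.0 images were present in the local minikube image cache, so they cannot be side-loaded into CRI-O and will instead be pulled on the node during kubeadm's preflight. A minimal sketch for confirming this on the Jenkins host (the directory path is taken from the log line above; the ls invocation itself is illustrative):

	    ls -l /home/jenkins/minikube-integration/19370-323890/.minikube/cache/images/amd64/registry.k8s.io/
	    # A missing or empty directory matches the "no such file or directory" stat error above;
	    # in that case each image is pulled from its registry during 'kubeadm init' preflight instead.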
	I0804 00:43:13.897262  373387 kubeadm.go:934] updating node { 192.168.72.118 8443 v1.20.0 crio true true} ...
	I0804 00:43:13.897391  373387 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-055939 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
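	The kubelet command line above is the drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 433-byte scp a few lines below). A minimal sketch, assuming shell access to the node via minikube ssh, for checking what unit and flags the kubelet actually runs with:

	    minikube ssh -p kubernetes-upgrade-055939 "systemctl cat kubelet"
	    minikube ssh -p kubernetes-upgrade-055939 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"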
	I0804 00:43:13.897474  373387 ssh_runner.go:195] Run: crio config
	I0804 00:43:13.947707  373387 cni.go:84] Creating CNI manager for ""
	I0804 00:43:13.947738  373387 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:43:13.947751  373387 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:43:13.947779  373387 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.118 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-055939 NodeName:kubernetes-upgrade-055939 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 00:43:13.947997  373387 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-055939"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:43:13.948128  373387 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 00:43:13.960570  373387 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:43:13.960659  373387 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:43:13.971597  373387 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0804 00:43:13.991531  373387 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:43:14.011944  373387 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
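	The kubeadm configuration generated above is what the 2126-byte scp just wrote to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of how it is subsequently exercised on the node, using only the paths and binary location that appear verbatim later in this log (the full --ignore-preflight-errors list is reproduced further down and is abbreviated here):

	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=<list from the log>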
	I0804 00:43:14.031851  373387 ssh_runner.go:195] Run: grep 192.168.72.118	control-plane.minikube.internal$ /etc/hosts
	I0804 00:43:14.035878  373387 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:43:14.049048  373387 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:43:14.189123  373387 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:43:14.210544  373387 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939 for IP: 192.168.72.118
	I0804 00:43:14.210581  373387 certs.go:194] generating shared ca certs ...
	I0804 00:43:14.210603  373387 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:14.210795  373387 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0804 00:43:14.210851  373387 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0804 00:43:14.210864  373387 certs.go:256] generating profile certs ...
	I0804 00:43:14.210957  373387 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.key
	I0804 00:43:14.210981  373387 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.crt with IP's: []
	I0804 00:43:14.573471  373387 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.crt ...
	I0804 00:43:14.573519  373387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.crt: {Name:mk4397697a9ca9625da7a56062e1064419353251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:14.573735  373387 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.key ...
	I0804 00:43:14.573759  373387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.key: {Name:mk576ad461954765e923f46b1937c0f293cd74b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:14.573898  373387 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.key.1be5cd41
	I0804 00:43:14.573923  373387 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.crt.1be5cd41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.118]
	I0804 00:43:14.625773  373387 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.crt.1be5cd41 ...
	I0804 00:43:14.625808  373387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.crt.1be5cd41: {Name:mk62b66c84993fe53db31f16eab615feaad70175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:14.631524  373387 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.key.1be5cd41 ...
	I0804 00:43:14.631565  373387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.key.1be5cd41: {Name:mk55e8fcf2e24c424e8c7cd39322005c273c321a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:14.631739  373387 certs.go:381] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.crt.1be5cd41 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.crt
	I0804 00:43:14.631872  373387 certs.go:385] copying /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.key.1be5cd41 -> /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.key
	I0804 00:43:14.631964  373387 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.key
	I0804 00:43:14.631988  373387 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.crt with IP's: []
	I0804 00:43:14.843217  373387 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.crt ...
	I0804 00:43:14.843258  373387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.crt: {Name:mk3719e3aaabe0d347869e3e76da22700e3dde84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:14.843468  373387 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.key ...
	I0804 00:43:14.843491  373387 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.key: {Name:mkd3746d4f3622db889db0f1d5461f12740fe811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:14.843699  373387 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0804 00:43:14.843753  373387 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0804 00:43:14.843768  373387 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 00:43:14.843801  373387 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0804 00:43:14.843838  373387 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:43:14.843871  373387 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0804 00:43:14.843932  373387 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:43:14.844652  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:43:14.878586  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:43:14.913355  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:43:14.944757  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 00:43:14.979894  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0804 00:43:15.016391  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:43:15.048015  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:43:15.090409  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:43:15.129725  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0804 00:43:15.174038  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:43:15.205069  373387 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0804 00:43:15.232562  373387 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:43:15.253589  373387 ssh_runner.go:195] Run: openssl version
	I0804 00:43:15.262201  373387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0804 00:43:15.278455  373387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0804 00:43:15.285249  373387 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0804 00:43:15.285338  373387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0804 00:43:15.294032  373387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:43:15.309933  373387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:43:15.321638  373387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:43:15.326558  373387 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:43:15.326625  373387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:43:15.333447  373387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:43:15.346381  373387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0804 00:43:15.358539  373387 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0804 00:43:15.364100  373387 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0804 00:43:15.364173  373387 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0804 00:43:15.370474  373387 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
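	A note on the symlink names in the three blocks above: the /etc/ssl/certs entries 3ec20f2e.0, b5213941.0 and 51391683.0 are the OpenSSL subject hashes of the corresponding certificates, which is why each ln -fs is paired with an openssl x509 -hash call. A minimal sketch of the same pattern for the minikubeCA certificate (the HASH variable is illustrative):

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941, per the log
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0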
	I0804 00:43:15.382856  373387 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:43:15.387810  373387 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:43:15.387872  373387 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-055939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.118 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:43:15.387947  373387 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:43:15.387997  373387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:43:15.439062  373387 cri.go:89] found id: ""
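	The empty result above (found id: "") means no kube-system containers exist in CRI-O yet, i.e. this is a clean first kubeadm init rather than a restart over an existing control plane. A minimal sketch, assuming shell access to the node, of the same query that ssh_runner executed:

	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # After a successful init this prints the static-pod container IDs; an empty result means none were created.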
	I0804 00:43:15.439143  373387 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:43:15.450563  373387 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:43:15.464736  373387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:43:15.476023  373387 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:43:15.476049  373387 kubeadm.go:157] found existing configuration files:
	
	I0804 00:43:15.476104  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:43:15.486612  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:43:15.486682  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:43:15.499370  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:43:15.511328  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:43:15.511403  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:43:15.522459  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:43:15.533309  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:43:15.533394  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:43:15.546605  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:43:15.559148  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:43:15.559223  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:43:15.570309  373387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:43:15.843460  373387 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:45:14.035710  373387 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:45:14.035849  373387 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:45:14.037556  373387 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:45:14.037623  373387 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:45:14.037737  373387 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:45:14.037866  373387 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:45:14.038000  373387 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:45:14.038070  373387 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:45:14.039729  373387 out.go:204]   - Generating certificates and keys ...
	I0804 00:45:14.039801  373387 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:45:14.039853  373387 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:45:14.039914  373387 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:45:14.039994  373387 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:45:14.040057  373387 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:45:14.040105  373387 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:45:14.040148  373387 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:45:14.040311  373387 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-055939 localhost] and IPs [192.168.72.118 127.0.0.1 ::1]
	I0804 00:45:14.040395  373387 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:45:14.040554  373387 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-055939 localhost] and IPs [192.168.72.118 127.0.0.1 ::1]
	I0804 00:45:14.040611  373387 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:45:14.040661  373387 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:45:14.040696  373387 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:45:14.040739  373387 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:45:14.040779  373387 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:45:14.040832  373387 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:45:14.040909  373387 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:45:14.040961  373387 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:45:14.041058  373387 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:45:14.041144  373387 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:45:14.041191  373387 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:45:14.041283  373387 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:45:14.042806  373387 out.go:204]   - Booting up control plane ...
	I0804 00:45:14.042891  373387 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:45:14.042961  373387 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:45:14.043021  373387 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:45:14.043103  373387 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:45:14.043232  373387 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:45:14.043273  373387 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:45:14.043325  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:45:14.043473  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:45:14.043526  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:45:14.043661  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:45:14.043738  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:45:14.043893  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:45:14.043962  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:45:14.044125  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:45:14.044204  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:45:14.044368  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:45:14.044377  373387 kubeadm.go:310] 
	I0804 00:45:14.044409  373387 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:45:14.044446  373387 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:45:14.044452  373387 kubeadm.go:310] 
	I0804 00:45:14.044481  373387 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:45:14.044509  373387 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:45:14.044608  373387 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:45:14.044618  373387 kubeadm.go:310] 
	I0804 00:45:14.044749  373387 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:45:14.044788  373387 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:45:14.044839  373387 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:45:14.044846  373387 kubeadm.go:310] 
	I0804 00:45:14.044989  373387 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:45:14.045208  373387 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:45:14.045234  373387 kubeadm.go:310] 
	I0804 00:45:14.045338  373387 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:45:14.045458  373387 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:45:14.045579  373387 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:45:14.045690  373387 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:45:14.045736  373387 kubeadm.go:310] 
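	Since the kubelet never answered its /healthz probe, the practical next step is the diagnosis that kubeadm's own guidance above suggests. A minimal sketch, assuming shell access to the node, combining only the commands already named in this log:

	    curl -sSL http://localhost:10248/healthz || true      # the probe kubeadm retried above
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause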
	W0804 00:45:14.045851  373387 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-055939 localhost] and IPs [192.168.72.118 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-055939 localhost] and IPs [192.168.72.118 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 00:45:14.045906  373387 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 00:45:15.207935  373387 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.161995157s)
	I0804 00:45:15.208043  373387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:45:15.224320  373387 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:45:15.234774  373387 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:45:15.234801  373387 kubeadm.go:157] found existing configuration files:
	
	I0804 00:45:15.234855  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:45:15.246180  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:45:15.246233  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:45:15.257804  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:45:15.268093  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:45:15.268178  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:45:15.278439  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:45:15.288209  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:45:15.288281  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:45:15.298210  373387 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:45:15.307610  373387 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:45:15.307696  373387 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:45:15.317752  373387 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:45:15.397252  373387 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 00:45:15.397352  373387 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:45:15.562347  373387 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:45:15.562498  373387 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:45:15.562634  373387 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:45:15.761142  373387 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:45:15.763574  373387 out.go:204]   - Generating certificates and keys ...
	I0804 00:45:15.763678  373387 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:45:15.763753  373387 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:45:15.763840  373387 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 00:45:15.763891  373387 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 00:45:15.763994  373387 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 00:45:15.764073  373387 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 00:45:15.764153  373387 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 00:45:15.764232  373387 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 00:45:15.764323  373387 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 00:45:15.764444  373387 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 00:45:15.764522  373387 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 00:45:15.764612  373387 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:45:15.967138  373387 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:45:16.240162  373387 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:45:16.313551  373387 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:45:16.561256  373387 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:45:16.585279  373387 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:45:16.585433  373387 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:45:16.585489  373387 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:45:16.731038  373387 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:45:16.732794  373387 out.go:204]   - Booting up control plane ...
	I0804 00:45:16.732942  373387 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:45:16.735976  373387 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:45:16.736977  373387 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:45:16.737787  373387 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:45:16.748776  373387 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 00:45:56.748052  373387 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 00:45:56.748364  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:45:56.748581  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:46:01.748814  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:46:01.749102  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:46:11.749460  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:46:11.749712  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:46:31.750976  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:46:31.751244  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:47:11.752764  373387 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 00:47:11.753036  373387 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 00:47:11.753049  373387 kubeadm.go:310] 
	I0804 00:47:11.753098  373387 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 00:47:11.753146  373387 kubeadm.go:310] 		timed out waiting for the condition
	I0804 00:47:11.753154  373387 kubeadm.go:310] 
	I0804 00:47:11.753199  373387 kubeadm.go:310] 	This error is likely caused by:
	I0804 00:47:11.753268  373387 kubeadm.go:310] 		- The kubelet is not running
	I0804 00:47:11.753446  373387 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 00:47:11.753474  373387 kubeadm.go:310] 
	I0804 00:47:11.753625  373387 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 00:47:11.753670  373387 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 00:47:11.753718  373387 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 00:47:11.753730  373387 kubeadm.go:310] 
	I0804 00:47:11.753867  373387 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 00:47:11.753976  373387 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 00:47:11.753987  373387 kubeadm.go:310] 
	I0804 00:47:11.754134  373387 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 00:47:11.754257  373387 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 00:47:11.754359  373387 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 00:47:11.754446  373387 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 00:47:11.754458  373387 kubeadm.go:310] 
	I0804 00:47:11.754882  373387 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:47:11.754995  373387 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 00:47:11.755093  373387 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 00:47:11.755192  373387 kubeadm.go:394] duration metric: took 3m56.367323873s to StartCluster
	I0804 00:47:11.755259  373387 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:47:11.755321  373387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:47:11.799261  373387 cri.go:89] found id: ""
	I0804 00:47:11.799292  373387 logs.go:276] 0 containers: []
	W0804 00:47:11.799304  373387 logs.go:278] No container was found matching "kube-apiserver"
	I0804 00:47:11.799312  373387 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:47:11.799440  373387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:47:11.835283  373387 cri.go:89] found id: ""
	I0804 00:47:11.835313  373387 logs.go:276] 0 containers: []
	W0804 00:47:11.835324  373387 logs.go:278] No container was found matching "etcd"
	I0804 00:47:11.835330  373387 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:47:11.835383  373387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:47:11.870759  373387 cri.go:89] found id: ""
	I0804 00:47:11.870791  373387 logs.go:276] 0 containers: []
	W0804 00:47:11.870800  373387 logs.go:278] No container was found matching "coredns"
	I0804 00:47:11.870807  373387 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:47:11.870871  373387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:47:11.910797  373387 cri.go:89] found id: ""
	I0804 00:47:11.910827  373387 logs.go:276] 0 containers: []
	W0804 00:47:11.910838  373387 logs.go:278] No container was found matching "kube-scheduler"
	I0804 00:47:11.910847  373387 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:47:11.910913  373387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:47:11.945095  373387 cri.go:89] found id: ""
	I0804 00:47:11.945124  373387 logs.go:276] 0 containers: []
	W0804 00:47:11.945134  373387 logs.go:278] No container was found matching "kube-proxy"
	I0804 00:47:11.945142  373387 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:47:11.945213  373387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:47:11.981482  373387 cri.go:89] found id: ""
	I0804 00:47:11.981541  373387 logs.go:276] 0 containers: []
	W0804 00:47:11.981553  373387 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 00:47:11.981561  373387 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:47:11.981623  373387 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:47:12.016194  373387 cri.go:89] found id: ""
	I0804 00:47:12.016227  373387 logs.go:276] 0 containers: []
	W0804 00:47:12.016239  373387 logs.go:278] No container was found matching "kindnet"
	I0804 00:47:12.016252  373387 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:47:12.016270  373387 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 00:47:12.148748  373387 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 00:47:12.148770  373387 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:47:12.148788  373387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 00:47:12.248405  373387 logs.go:123] Gathering logs for container status ...
	I0804 00:47:12.248447  373387 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 00:47:12.295512  373387 logs.go:123] Gathering logs for kubelet ...
	I0804 00:47:12.295555  373387 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 00:47:12.348455  373387 logs.go:123] Gathering logs for dmesg ...
	I0804 00:47:12.348493  373387 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0804 00:47:12.362087  373387 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 00:47:12.362131  373387 out.go:239] * 
	* 
	W0804 00:47:12.362187  373387 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:47:12.362208  373387 out.go:239] * 
	* 
	W0804 00:47:12.363111  373387 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 00:47:12.366491  373387 out.go:177] 
	W0804 00:47:12.367825  373387 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 00:47:12.367901  373387 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 00:47:12.367928  373387 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 00:47:12.369220  373387 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
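The kubeadm output above shows the kubelet never answering on localhost:10248, and minikube's own suggestion is to read the kubelet journal and retry with the systemd cgroup driver. A minimal troubleshooting sketch built only from the hints printed in the log (profile name and driver flags copied from the failed command; whether these commands reveal the root cause on this node is an assumption):

	# inspect kubelet and CRI-O state on the node (commands quoted from the kubeadm hint above)
	minikube ssh -p kubernetes-upgrade-055939 -- sudo systemctl status kubelet
	minikube ssh -p kubernetes-upgrade-055939 -- sudo journalctl -xeu kubelet
	minikube ssh -p kubernetes-upgrade-055939 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry with the cgroup driver minikube suggests for this failure mode
	minikube start -p kubernetes-upgrade-055939 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd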
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-055939
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-055939: (1.449760798s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-055939 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-055939 status --format={{.Host}}: exit status 7 (64.363966ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
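Exit status 7 here is treated as acceptable by the test: minikube documents the status exit code as a bitmask of host, cluster and Kubernetes state, so a freshly stopped profile typically reports 1 + 2 + 4 = 7. A hedged sketch of the same probe the test performs (interpretation of the bitmask is an assumption from minikube's status documentation, not from this log):

	# the test's status check; a stopped profile is expected to exit non-zero
	out/minikube-linux-amd64 -p kubernetes-upgrade-055939 status --format='{{.Host}}'
	echo "status exit code: $?"   # 7 here corresponds to host, cluster and apiserver all stopped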
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0804 00:47:24.416771  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.493351823s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-055939 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.483412ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-055939] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-055939
	    minikube start -p kubernetes-upgrade-055939 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0559392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-055939 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
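The K8S_DOWNGRADE_UNSUPPORTED error above lists three supported ways forward, and the test takes the third (restart at the existing version). A minimal sketch of the first and third options, copied from minikube's own suggestion text with the profile name from this run:

	# option 1 from the suggestion: recreate the profile at the older Kubernetes version
	minikube delete -p kubernetes-upgrade-055939
	minikube start -p kubernetes-upgrade-055939 --kubernetes-version=v1.20.0

	# option 3 (what the test does next): keep the existing cluster at the newer version
	minikube start -p kubernetes-upgrade-055939 --kubernetes-version=v1.31.0-rc.0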
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-055939 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.443977836s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-04 00:49:19.055082739 +0000 UTC m=+6386.922241008
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-055939 -n kubernetes-upgrade-055939
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-055939 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-055939 logs -n 25: (2.759478914s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo journalctl                       | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo cat                              | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo cat                              | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo cat                              | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo docker                           | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo cat                              | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo cat                              | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo                                  | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo cat                              | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo cat                              | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo containerd                       | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo systemctl                        | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo find                             | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-675149 sudo crio                             | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-675149                                       | auto-675149   | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC | 04 Aug 24 00:48 UTC |
	| start   | -p calico-675149 --memory=3072                       | calico-675149 | jenkins | v1.33.1 | 04 Aug 24 00:48 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:48:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:48:45.769557  380926 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:48:45.769678  380926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:48:45.769688  380926 out.go:304] Setting ErrFile to fd 2...
	I0804 00:48:45.769694  380926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:48:45.769899  380926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:48:45.770506  380926 out.go:298] Setting JSON to false
	I0804 00:48:45.771634  380926 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":34274,"bootTime":1722698252,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:48:45.771722  380926 start.go:139] virtualization: kvm guest
	I0804 00:48:45.773823  380926 out.go:177] * [calico-675149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:48:45.775601  380926 out.go:177]   - MINIKUBE_LOCATION=19370
	I0804 00:48:45.775627  380926 notify.go:220] Checking for updates...
	I0804 00:48:45.778328  380926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:48:45.779768  380926 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:48:45.781120  380926 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:48:45.782478  380926 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:48:45.783832  380926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:48:45.785847  380926 config.go:182] Loaded profile config "cert-expiration-443385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:48:45.786003  380926 config.go:182] Loaded profile config "kindnet-675149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:48:45.786141  380926 config.go:182] Loaded profile config "kubernetes-upgrade-055939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:48:45.786266  380926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:48:45.825681  380926 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:48:45.827140  380926 start.go:297] selected driver: kvm2
	I0804 00:48:45.827161  380926 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:48:45.827172  380926 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:48:45.828070  380926 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:48:45.828187  380926 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:48:45.845541  380926 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:48:45.845606  380926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:48:45.845915  380926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:48:45.845991  380926 cni.go:84] Creating CNI manager for "calico"
	I0804 00:48:45.846004  380926 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0804 00:48:45.846081  380926 start.go:340] cluster config:
	{Name:calico-675149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-675149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:48:45.846208  380926 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:48:45.848025  380926 out.go:177] * Starting "calico-675149" primary control-plane node in "calico-675149" cluster
	I0804 00:48:45.849349  380926 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:48:45.849398  380926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:48:45.849413  380926 cache.go:56] Caching tarball of preloaded images
	I0804 00:48:45.849527  380926 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:48:45.849541  380926 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:48:45.849640  380926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/calico-675149/config.json ...
	I0804 00:48:45.849661  380926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/calico-675149/config.json: {Name:mk3eeccb6fe088f8190005b461380f6f0ccf187a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:48:45.849801  380926 start.go:360] acquireMachinesLock for calico-675149: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:48:45.849828  380926 start.go:364] duration metric: took 15.613µs to acquireMachinesLock for "calico-675149"
	I0804 00:48:45.849850  380926 start.go:93] Provisioning new machine with config: &{Name:calico-675149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-675149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:48:45.849917  380926 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:48:45.759334  378776 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0804 00:48:45.766219  378776 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0804 00:48:45.766240  378776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0804 00:48:45.789078  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0804 00:48:46.100641  378776 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:48:46.100722  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:46.100750  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-675149 minikube.k8s.io/updated_at=2024_08_04T00_48_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf minikube.k8s.io/name=kindnet-675149 minikube.k8s.io/primary=true
	I0804 00:48:46.202115  378776 ops.go:34] apiserver oom_adj: -16
	I0804 00:48:46.202325  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:46.703396  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:47.202371  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:47.702534  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:48.202406  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:48.703230  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:45.851534  380926 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0804 00:48:45.851731  380926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:48:45.851776  380926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:48:45.867450  380926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0804 00:48:45.867940  380926 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:48:45.868566  380926 main.go:141] libmachine: Using API Version  1
	I0804 00:48:45.868582  380926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:48:45.869039  380926 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:48:45.869259  380926 main.go:141] libmachine: (calico-675149) Calling .GetMachineName
	I0804 00:48:45.869447  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:48:45.869643  380926 start.go:159] libmachine.API.Create for "calico-675149" (driver="kvm2")
	I0804 00:48:45.869679  380926 client.go:168] LocalClient.Create starting
	I0804 00:48:45.869728  380926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0804 00:48:45.869775  380926 main.go:141] libmachine: Decoding PEM data...
	I0804 00:48:45.869796  380926 main.go:141] libmachine: Parsing certificate...
	I0804 00:48:45.869881  380926 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0804 00:48:45.869918  380926 main.go:141] libmachine: Decoding PEM data...
	I0804 00:48:45.869947  380926 main.go:141] libmachine: Parsing certificate...
	I0804 00:48:45.869981  380926 main.go:141] libmachine: Running pre-create checks...
	I0804 00:48:45.870001  380926 main.go:141] libmachine: (calico-675149) Calling .PreCreateCheck
	I0804 00:48:45.870357  380926 main.go:141] libmachine: (calico-675149) Calling .GetConfigRaw
	I0804 00:48:45.870830  380926 main.go:141] libmachine: Creating machine...
	I0804 00:48:45.870849  380926 main.go:141] libmachine: (calico-675149) Calling .Create
	I0804 00:48:45.871016  380926 main.go:141] libmachine: (calico-675149) Creating KVM machine...
	I0804 00:48:45.872602  380926 main.go:141] libmachine: (calico-675149) DBG | found existing default KVM network
	I0804 00:48:45.874621  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:45.874419  380949 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfb0}
	I0804 00:48:45.874648  380926 main.go:141] libmachine: (calico-675149) DBG | created network xml: 
	I0804 00:48:45.874662  380926 main.go:141] libmachine: (calico-675149) DBG | <network>
	I0804 00:48:45.874670  380926 main.go:141] libmachine: (calico-675149) DBG |   <name>mk-calico-675149</name>
	I0804 00:48:45.874679  380926 main.go:141] libmachine: (calico-675149) DBG |   <dns enable='no'/>
	I0804 00:48:45.874692  380926 main.go:141] libmachine: (calico-675149) DBG |   
	I0804 00:48:45.874703  380926 main.go:141] libmachine: (calico-675149) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0804 00:48:45.874711  380926 main.go:141] libmachine: (calico-675149) DBG |     <dhcp>
	I0804 00:48:45.874721  380926 main.go:141] libmachine: (calico-675149) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0804 00:48:45.874732  380926 main.go:141] libmachine: (calico-675149) DBG |     </dhcp>
	I0804 00:48:45.874741  380926 main.go:141] libmachine: (calico-675149) DBG |   </ip>
	I0804 00:48:45.874749  380926 main.go:141] libmachine: (calico-675149) DBG |   
	I0804 00:48:45.874761  380926 main.go:141] libmachine: (calico-675149) DBG | </network>
	I0804 00:48:45.874771  380926 main.go:141] libmachine: (calico-675149) DBG | 
	I0804 00:48:45.880378  380926 main.go:141] libmachine: (calico-675149) DBG | trying to create private KVM network mk-calico-675149 192.168.39.0/24...
	I0804 00:48:45.963861  380926 main.go:141] libmachine: (calico-675149) DBG | private KVM network mk-calico-675149 192.168.39.0/24 created
	I0804 00:48:45.963920  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:45.963801  380949 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:48:45.963943  380926 main.go:141] libmachine: (calico-675149) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149 ...
	I0804 00:48:45.963959  380926 main.go:141] libmachine: (calico-675149) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:48:45.963997  380926 main.go:141] libmachine: (calico-675149) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:48:46.242517  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:46.242357  380949 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/id_rsa...
	I0804 00:48:46.408497  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:46.408340  380949 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/calico-675149.rawdisk...
	I0804 00:48:46.408535  380926 main.go:141] libmachine: (calico-675149) DBG | Writing magic tar header
	I0804 00:48:46.408550  380926 main.go:141] libmachine: (calico-675149) DBG | Writing SSH key tar header
	I0804 00:48:46.408566  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:46.408479  380949 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149 ...
	I0804 00:48:46.408689  380926 main.go:141] libmachine: (calico-675149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149
	I0804 00:48:46.408722  380926 main.go:141] libmachine: (calico-675149) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149 (perms=drwx------)
	I0804 00:48:46.408730  380926 main.go:141] libmachine: (calico-675149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0804 00:48:46.408740  380926 main.go:141] libmachine: (calico-675149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:48:46.408746  380926 main.go:141] libmachine: (calico-675149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0804 00:48:46.408754  380926 main.go:141] libmachine: (calico-675149) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:48:46.408759  380926 main.go:141] libmachine: (calico-675149) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:48:46.408766  380926 main.go:141] libmachine: (calico-675149) DBG | Checking permissions on dir: /home
	I0804 00:48:46.408771  380926 main.go:141] libmachine: (calico-675149) DBG | Skipping /home - not owner
	I0804 00:48:46.408784  380926 main.go:141] libmachine: (calico-675149) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:48:46.408793  380926 main.go:141] libmachine: (calico-675149) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0804 00:48:46.408816  380926 main.go:141] libmachine: (calico-675149) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0804 00:48:46.408841  380926 main.go:141] libmachine: (calico-675149) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:48:46.408861  380926 main.go:141] libmachine: (calico-675149) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:48:46.408873  380926 main.go:141] libmachine: (calico-675149) Creating domain...
	I0804 00:48:46.410016  380926 main.go:141] libmachine: (calico-675149) define libvirt domain using xml: 
	I0804 00:48:46.410036  380926 main.go:141] libmachine: (calico-675149) <domain type='kvm'>
	I0804 00:48:46.410045  380926 main.go:141] libmachine: (calico-675149)   <name>calico-675149</name>
	I0804 00:48:46.410073  380926 main.go:141] libmachine: (calico-675149)   <memory unit='MiB'>3072</memory>
	I0804 00:48:46.410084  380926 main.go:141] libmachine: (calico-675149)   <vcpu>2</vcpu>
	I0804 00:48:46.410093  380926 main.go:141] libmachine: (calico-675149)   <features>
	I0804 00:48:46.410102  380926 main.go:141] libmachine: (calico-675149)     <acpi/>
	I0804 00:48:46.410113  380926 main.go:141] libmachine: (calico-675149)     <apic/>
	I0804 00:48:46.410123  380926 main.go:141] libmachine: (calico-675149)     <pae/>
	I0804 00:48:46.410133  380926 main.go:141] libmachine: (calico-675149)     
	I0804 00:48:46.410143  380926 main.go:141] libmachine: (calico-675149)   </features>
	I0804 00:48:46.410156  380926 main.go:141] libmachine: (calico-675149)   <cpu mode='host-passthrough'>
	I0804 00:48:46.410168  380926 main.go:141] libmachine: (calico-675149)   
	I0804 00:48:46.410176  380926 main.go:141] libmachine: (calico-675149)   </cpu>
	I0804 00:48:46.410187  380926 main.go:141] libmachine: (calico-675149)   <os>
	I0804 00:48:46.410212  380926 main.go:141] libmachine: (calico-675149)     <type>hvm</type>
	I0804 00:48:46.410244  380926 main.go:141] libmachine: (calico-675149)     <boot dev='cdrom'/>
	I0804 00:48:46.410261  380926 main.go:141] libmachine: (calico-675149)     <boot dev='hd'/>
	I0804 00:48:46.410272  380926 main.go:141] libmachine: (calico-675149)     <bootmenu enable='no'/>
	I0804 00:48:46.410284  380926 main.go:141] libmachine: (calico-675149)   </os>
	I0804 00:48:46.410294  380926 main.go:141] libmachine: (calico-675149)   <devices>
	I0804 00:48:46.410312  380926 main.go:141] libmachine: (calico-675149)     <disk type='file' device='cdrom'>
	I0804 00:48:46.410326  380926 main.go:141] libmachine: (calico-675149)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/boot2docker.iso'/>
	I0804 00:48:46.410338  380926 main.go:141] libmachine: (calico-675149)       <target dev='hdc' bus='scsi'/>
	I0804 00:48:46.410345  380926 main.go:141] libmachine: (calico-675149)       <readonly/>
	I0804 00:48:46.410353  380926 main.go:141] libmachine: (calico-675149)     </disk>
	I0804 00:48:46.410364  380926 main.go:141] libmachine: (calico-675149)     <disk type='file' device='disk'>
	I0804 00:48:46.410397  380926 main.go:141] libmachine: (calico-675149)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:48:46.410424  380926 main.go:141] libmachine: (calico-675149)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/calico-675149.rawdisk'/>
	I0804 00:48:46.410438  380926 main.go:141] libmachine: (calico-675149)       <target dev='hda' bus='virtio'/>
	I0804 00:48:46.410449  380926 main.go:141] libmachine: (calico-675149)     </disk>
	I0804 00:48:46.410458  380926 main.go:141] libmachine: (calico-675149)     <interface type='network'>
	I0804 00:48:46.410469  380926 main.go:141] libmachine: (calico-675149)       <source network='mk-calico-675149'/>
	I0804 00:48:46.410481  380926 main.go:141] libmachine: (calico-675149)       <model type='virtio'/>
	I0804 00:48:46.410489  380926 main.go:141] libmachine: (calico-675149)     </interface>
	I0804 00:48:46.410497  380926 main.go:141] libmachine: (calico-675149)     <interface type='network'>
	I0804 00:48:46.410506  380926 main.go:141] libmachine: (calico-675149)       <source network='default'/>
	I0804 00:48:46.410517  380926 main.go:141] libmachine: (calico-675149)       <model type='virtio'/>
	I0804 00:48:46.410529  380926 main.go:141] libmachine: (calico-675149)     </interface>
	I0804 00:48:46.410546  380926 main.go:141] libmachine: (calico-675149)     <serial type='pty'>
	I0804 00:48:46.410563  380926 main.go:141] libmachine: (calico-675149)       <target port='0'/>
	I0804 00:48:46.410601  380926 main.go:141] libmachine: (calico-675149)     </serial>
	I0804 00:48:46.410613  380926 main.go:141] libmachine: (calico-675149)     <console type='pty'>
	I0804 00:48:46.410622  380926 main.go:141] libmachine: (calico-675149)       <target type='serial' port='0'/>
	I0804 00:48:46.410629  380926 main.go:141] libmachine: (calico-675149)     </console>
	I0804 00:48:46.410634  380926 main.go:141] libmachine: (calico-675149)     <rng model='virtio'>
	I0804 00:48:46.410642  380926 main.go:141] libmachine: (calico-675149)       <backend model='random'>/dev/random</backend>
	I0804 00:48:46.410646  380926 main.go:141] libmachine: (calico-675149)     </rng>
	I0804 00:48:46.410669  380926 main.go:141] libmachine: (calico-675149)     
	I0804 00:48:46.410685  380926 main.go:141] libmachine: (calico-675149)     
	I0804 00:48:46.410698  380926 main.go:141] libmachine: (calico-675149)   </devices>
	I0804 00:48:46.410706  380926 main.go:141] libmachine: (calico-675149) </domain>
	I0804 00:48:46.410718  380926 main.go:141] libmachine: (calico-675149) 
	I0804 00:48:46.415078  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:84:74:10 in network default
	I0804 00:48:46.415746  380926 main.go:141] libmachine: (calico-675149) Ensuring networks are active...
	I0804 00:48:46.415769  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:46.416596  380926 main.go:141] libmachine: (calico-675149) Ensuring network default is active
	I0804 00:48:46.416959  380926 main.go:141] libmachine: (calico-675149) Ensuring network mk-calico-675149 is active
	I0804 00:48:46.417533  380926 main.go:141] libmachine: (calico-675149) Getting domain xml...
	I0804 00:48:46.418368  380926 main.go:141] libmachine: (calico-675149) Creating domain...
	I0804 00:48:47.697297  380926 main.go:141] libmachine: (calico-675149) Waiting to get IP...
	I0804 00:48:47.698299  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:47.698794  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:47.698820  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:47.698765  380949 retry.go:31] will retry after 268.114744ms: waiting for machine to come up
	I0804 00:48:47.968293  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:47.968980  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:47.969011  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:47.968942  380949 retry.go:31] will retry after 353.760938ms: waiting for machine to come up
	I0804 00:48:48.324379  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:48.324971  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:48.325005  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:48.324916  380949 retry.go:31] will retry after 415.517893ms: waiting for machine to come up
	I0804 00:48:48.742475  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:48.743061  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:48.743098  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:48.743015  380949 retry.go:31] will retry after 481.79097ms: waiting for machine to come up
	I0804 00:48:49.226435  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:49.227013  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:49.227047  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:49.226950  380949 retry.go:31] will retry after 682.539618ms: waiting for machine to come up
	I0804 00:48:49.910667  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:49.911186  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:49.911215  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:49.911132  380949 retry.go:31] will retry after 652.921594ms: waiting for machine to come up
	I0804 00:48:50.566096  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:50.566862  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:50.566892  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:50.566789  380949 retry.go:31] will retry after 982.049937ms: waiting for machine to come up
	I0804 00:48:50.584879  379400 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 60d8a589a32e85972104d80a804aab19d57400c26bc1edfbdc0ad2db28c33447 f2e6ff1b4b12d6dbb13a281a15fa032aa4a06ef33447c3513e6f1f07da7165e2 557f8813e2e3ab13c095262f5c1e52c1aad17506112341b36a6dd53e89fedc53 a005063d7c7b0f9b0828cafb914e22e36e5bbe03fa3956d7a72e9bd4968e5d30 944dfce4a2a9a617542a44d269cd17f45c38d67eaf9867b6df760dd060e1b55f ad27b346dc8a0a2515319f88fb3a6545ef5d9d07cf15956c24cb595d9a4425a0 0c9d37ab851c3786fe58143897e885b2cad8d360e7633c5770c7e53130abdda0 f0278f149c1497d384ff74a703994929a9b78ce23154b9d76e9da397a67fbed7 524c77908293b22edf65f964db05c276a09b11797a0037b7dfe3766b49b81725: (9.274887006s)
	I0804 00:48:50.584977  379400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 00:48:50.633692  379400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:48:50.646684  379400 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug  4 00:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug  4 00:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Aug  4 00:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Aug  4 00:48 /etc/kubernetes/scheduler.conf
	
	I0804 00:48:50.646773  379400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:48:50.657375  379400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:48:50.667757  379400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:48:50.678745  379400 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:48:50.678830  379400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:48:50.689365  379400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:48:50.699739  379400 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:48:50.699810  379400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:48:50.712724  379400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:48:50.724662  379400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:48:50.781948  379400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:48:51.964136  379400 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.182141757s)
	I0804 00:48:51.964185  379400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:48:52.222373  379400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:48:52.304626  379400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:48:52.404538  379400 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:48:52.404636  379400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:48:52.442051  379400 api_server.go:72] duration metric: took 37.511391ms to wait for apiserver process to appear ...
	I0804 00:48:52.442086  379400 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:48:52.442109  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:48:49.203267  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:49.702514  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:50.202444  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:50.702550  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:51.202711  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:51.702356  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:52.202804  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:52.703230  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:53.202778  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:53.702939  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:51.550213  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:51.550765  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:51.550795  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:51.550711  380949 retry.go:31] will retry after 894.77252ms: waiting for machine to come up
	I0804 00:48:52.446594  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:52.447175  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:52.447209  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:52.447117  380949 retry.go:31] will retry after 1.828703361s: waiting for machine to come up
	I0804 00:48:54.276950  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:54.277389  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:54.277422  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:54.277316  380949 retry.go:31] will retry after 1.419816539s: waiting for machine to come up
	I0804 00:48:55.698639  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:55.699157  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:55.699192  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:55.699072  380949 retry.go:31] will retry after 2.846567152s: waiting for machine to come up
	I0804 00:48:57.443234  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 00:48:57.443305  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:48:54.203216  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:54.702399  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:55.203223  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:55.703356  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:56.203371  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:56.703309  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:57.203393  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:57.702810  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:58.202413  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:58.702747  378776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:48:58.803211  378776 kubeadm.go:1113] duration metric: took 12.702554813s to wait for elevateKubeSystemPrivileges
	I0804 00:48:58.803258  378776 kubeadm.go:394] duration metric: took 24.441855463s to StartCluster
	I0804 00:48:58.803283  378776 settings.go:142] acquiring lock: {Name:mk918fd72253bf33e8bae308fd36ed8b1c353763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:48:58.803397  378776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:48:58.804744  378776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/kubeconfig: {Name:mkd789cdd11c6330d283dbc76129ed198eb15398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:48:58.805035  378776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 00:48:58.805048  378776 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.206 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:48:58.805122  378776 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:48:58.805216  378776 addons.go:69] Setting storage-provisioner=true in profile "kindnet-675149"
	I0804 00:48:58.805234  378776 addons.go:69] Setting default-storageclass=true in profile "kindnet-675149"
	I0804 00:48:58.805256  378776 addons.go:234] Setting addon storage-provisioner=true in "kindnet-675149"
	I0804 00:48:58.805277  378776 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-675149"
	I0804 00:48:58.805285  378776 config.go:182] Loaded profile config "kindnet-675149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:48:58.805297  378776 host.go:66] Checking if "kindnet-675149" exists ...
	I0804 00:48:58.805682  378776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:48:58.805704  378776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:48:58.805720  378776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:48:58.805728  378776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:48:58.806601  378776 out.go:177] * Verifying Kubernetes components...
	I0804 00:48:58.808027  378776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:48:58.822996  378776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0804 00:48:58.823143  378776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I0804 00:48:58.823516  378776 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:48:58.823823  378776 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:48:58.824110  378776 main.go:141] libmachine: Using API Version  1
	I0804 00:48:58.824128  378776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:48:58.824467  378776 main.go:141] libmachine: Using API Version  1
	I0804 00:48:58.824483  378776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:48:58.824516  378776 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:48:58.825098  378776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:48:58.825142  378776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:48:58.825489  378776 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:48:58.825689  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetState
	I0804 00:48:58.829241  378776 addons.go:234] Setting addon default-storageclass=true in "kindnet-675149"
	I0804 00:48:58.829290  378776 host.go:66] Checking if "kindnet-675149" exists ...
	I0804 00:48:58.829634  378776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:48:58.829694  378776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:48:58.842948  378776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38651
	I0804 00:48:58.843499  378776 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:48:58.844118  378776 main.go:141] libmachine: Using API Version  1
	I0804 00:48:58.844147  378776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:48:58.844515  378776 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:48:58.844712  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetState
	I0804 00:48:58.847575  378776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0804 00:48:58.847593  378776 main.go:141] libmachine: (kindnet-675149) Calling .DriverName
	I0804 00:48:58.848129  378776 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:48:58.848706  378776 main.go:141] libmachine: Using API Version  1
	I0804 00:48:58.848730  378776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:48:58.849143  378776 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:48:58.849631  378776 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:48:58.849993  378776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:48:58.850044  378776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:48:58.851666  378776 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:48:58.851689  378776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:48:58.851710  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHHostname
	I0804 00:48:58.855856  378776 main.go:141] libmachine: (kindnet-675149) DBG | domain kindnet-675149 has defined MAC address 52:54:00:b0:fd:f3 in network mk-kindnet-675149
	I0804 00:48:58.856383  378776 main.go:141] libmachine: (kindnet-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:fd:f3", ip: ""} in network mk-kindnet-675149: {Iface:virbr1 ExpiryTime:2024-08-04 01:48:17 +0000 UTC Type:0 Mac:52:54:00:b0:fd:f3 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:kindnet-675149 Clientid:01:52:54:00:b0:fd:f3}
	I0804 00:48:58.856410  378776 main.go:141] libmachine: (kindnet-675149) DBG | domain kindnet-675149 has defined IP address 192.168.61.206 and MAC address 52:54:00:b0:fd:f3 in network mk-kindnet-675149
	I0804 00:48:58.856755  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHPort
	I0804 00:48:58.856955  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHKeyPath
	I0804 00:48:58.857093  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHUsername
	I0804 00:48:58.857231  378776 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kindnet-675149/id_rsa Username:docker}
	I0804 00:48:58.869493  378776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0804 00:48:58.870060  378776 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:48:58.870560  378776 main.go:141] libmachine: Using API Version  1
	I0804 00:48:58.870587  378776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:48:58.870946  378776 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:48:58.871117  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetState
	I0804 00:48:58.872889  378776 main.go:141] libmachine: (kindnet-675149) Calling .DriverName
	I0804 00:48:58.873101  378776 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:48:58.873115  378776 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:48:58.873129  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHHostname
	I0804 00:48:58.875869  378776 main.go:141] libmachine: (kindnet-675149) DBG | domain kindnet-675149 has defined MAC address 52:54:00:b0:fd:f3 in network mk-kindnet-675149
	I0804 00:48:58.876237  378776 main.go:141] libmachine: (kindnet-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:fd:f3", ip: ""} in network mk-kindnet-675149: {Iface:virbr1 ExpiryTime:2024-08-04 01:48:17 +0000 UTC Type:0 Mac:52:54:00:b0:fd:f3 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:kindnet-675149 Clientid:01:52:54:00:b0:fd:f3}
	I0804 00:48:58.876253  378776 main.go:141] libmachine: (kindnet-675149) DBG | domain kindnet-675149 has defined IP address 192.168.61.206 and MAC address 52:54:00:b0:fd:f3 in network mk-kindnet-675149
	I0804 00:48:58.876462  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHPort
	I0804 00:48:58.876615  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHKeyPath
	I0804 00:48:58.876781  378776 main.go:141] libmachine: (kindnet-675149) Calling .GetSSHUsername
	I0804 00:48:58.876899  378776 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kindnet-675149/id_rsa Username:docker}
	I0804 00:48:59.122289  378776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 00:48:59.122332  378776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:48:59.218968  378776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:48:59.243686  378776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:48:59.742923  378776 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0804 00:48:59.743900  378776 node_ready.go:35] waiting up to 15m0s for node "kindnet-675149" to be "Ready" ...
	I0804 00:49:00.099373  378776 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:00.099405  378776 main.go:141] libmachine: (kindnet-675149) Calling .Close
	I0804 00:49:00.099445  378776 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:00.099467  378776 main.go:141] libmachine: (kindnet-675149) Calling .Close
	I0804 00:49:00.099712  378776 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:00.099727  378776 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:00.099736  378776 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:00.099743  378776 main.go:141] libmachine: (kindnet-675149) Calling .Close
	I0804 00:49:00.099860  378776 main.go:141] libmachine: (kindnet-675149) DBG | Closing plugin on server side
	I0804 00:49:00.099919  378776 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:00.099928  378776 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:00.099937  378776 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:00.099944  378776 main.go:141] libmachine: (kindnet-675149) Calling .Close
	I0804 00:49:00.100070  378776 main.go:141] libmachine: (kindnet-675149) DBG | Closing plugin on server side
	I0804 00:49:00.100098  378776 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:00.100109  378776 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:00.100297  378776 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:00.100320  378776 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:00.108911  378776 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:00.108934  378776 main.go:141] libmachine: (kindnet-675149) Calling .Close
	I0804 00:49:00.109247  378776 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:00.109271  378776 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:00.111800  378776 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0804 00:48:58.546962  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:48:58.547490  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:48:58.547522  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:48:58.547440  380949 retry.go:31] will retry after 3.097159652s: waiting for machine to come up
	I0804 00:49:02.443799  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 00:49:02.443845  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:00.112940  378776 addons.go:510] duration metric: took 1.307818883s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0804 00:49:00.246711  378776 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-675149" context rescaled to 1 replicas
	I0804 00:49:01.748467  378776 node_ready.go:53] node "kindnet-675149" has status "Ready":"False"
	I0804 00:49:01.645988  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:01.646436  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:49:01.646469  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:49:01.646382  380949 retry.go:31] will retry after 4.116771589s: waiting for machine to come up
	I0804 00:49:05.766534  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:05.766898  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find current IP address of domain calico-675149 in network mk-calico-675149
	I0804 00:49:05.766927  380926 main.go:141] libmachine: (calico-675149) DBG | I0804 00:49:05.766885  380949 retry.go:31] will retry after 5.20549463s: waiting for machine to come up
	I0804 00:49:07.444370  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 00:49:07.444424  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:04.247400  378776 node_ready.go:53] node "kindnet-675149" has status "Ready":"False"
	I0804 00:49:06.248406  378776 node_ready.go:53] node "kindnet-675149" has status "Ready":"False"
	I0804 00:49:08.248568  378776 node_ready.go:53] node "kindnet-675149" has status "Ready":"False"
	I0804 00:49:10.595984  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": read tcp 192.168.72.1:50584->192.168.72.118:8443: read: connection reset by peer
	I0804 00:49:10.596044  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:10.596678  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": dial tcp 192.168.72.118:8443: connect: connection refused
	I0804 00:49:10.942208  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:10.942973  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": dial tcp 192.168.72.118:8443: connect: connection refused
	I0804 00:49:11.442673  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:11.443336  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": dial tcp 192.168.72.118:8443: connect: connection refused
	I0804 00:49:11.942652  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:11.943277  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": dial tcp 192.168.72.118:8443: connect: connection refused
	I0804 00:49:12.442242  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:12.442959  379400 api_server.go:269] stopped: https://192.168.72.118:8443/healthz: Get "https://192.168.72.118:8443/healthz": dial tcp 192.168.72.118:8443: connect: connection refused
	I0804 00:49:12.942661  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:10.746932  378776 node_ready.go:53] node "kindnet-675149" has status "Ready":"False"
	I0804 00:49:12.748088  378776 node_ready.go:53] node "kindnet-675149" has status "Ready":"False"
	I0804 00:49:13.248023  378776 node_ready.go:49] node "kindnet-675149" has status "Ready":"True"
	I0804 00:49:13.248066  378776 node_ready.go:38] duration metric: took 13.504128599s for node "kindnet-675149" to be "Ready" ...
	I0804 00:49:13.248077  378776 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:49:13.257615  378776 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-rs72h" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:10.974413  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:10.974954  380926 main.go:141] libmachine: (calico-675149) Found IP for machine: 192.168.39.54
	I0804 00:49:10.974982  380926 main.go:141] libmachine: (calico-675149) Reserving static IP address...
	I0804 00:49:10.974992  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has current primary IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:10.975346  380926 main.go:141] libmachine: (calico-675149) DBG | unable to find host DHCP lease matching {name: "calico-675149", mac: "52:54:00:06:ac:40", ip: "192.168.39.54"} in network mk-calico-675149
	I0804 00:49:11.059148  380926 main.go:141] libmachine: (calico-675149) DBG | Getting to WaitForSSH function...
	I0804 00:49:11.059182  380926 main.go:141] libmachine: (calico-675149) Reserved static IP address: 192.168.39.54
	I0804 00:49:11.059195  380926 main.go:141] libmachine: (calico-675149) Waiting for SSH to be available...
	I0804 00:49:11.062160  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.062569  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:11.062592  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.062740  380926 main.go:141] libmachine: (calico-675149) DBG | Using SSH client type: external
	I0804 00:49:11.062768  380926 main.go:141] libmachine: (calico-675149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/id_rsa (-rw-------)
	I0804 00:49:11.062801  380926 main.go:141] libmachine: (calico-675149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:49:11.062814  380926 main.go:141] libmachine: (calico-675149) DBG | About to run SSH command:
	I0804 00:49:11.062829  380926 main.go:141] libmachine: (calico-675149) DBG | exit 0
	I0804 00:49:11.185740  380926 main.go:141] libmachine: (calico-675149) DBG | SSH cmd err, output: <nil>: 
	I0804 00:49:11.186083  380926 main.go:141] libmachine: (calico-675149) KVM machine creation complete!
	I0804 00:49:11.186590  380926 main.go:141] libmachine: (calico-675149) Calling .GetConfigRaw
	I0804 00:49:11.187180  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:49:11.187437  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:49:11.187655  380926 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:49:11.187715  380926 main.go:141] libmachine: (calico-675149) Calling .GetState
	I0804 00:49:11.189118  380926 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:49:11.189142  380926 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:49:11.189148  380926 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:49:11.189154  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:11.191743  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.192103  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:11.192127  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.192294  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:11.192475  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.192667  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.192849  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:11.193058  380926 main.go:141] libmachine: Using SSH client type: native
	I0804 00:49:11.193271  380926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0804 00:49:11.193284  380926 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:49:11.293085  380926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:49:11.293117  380926 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:49:11.293128  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:11.296226  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.296618  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:11.296651  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.296869  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:11.297083  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.297262  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.297468  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:11.297666  380926 main.go:141] libmachine: Using SSH client type: native
	I0804 00:49:11.297845  380926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0804 00:49:11.297857  380926 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:49:11.398431  380926 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:49:11.398536  380926 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:49:11.398548  380926 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:49:11.398556  380926 main.go:141] libmachine: (calico-675149) Calling .GetMachineName
	I0804 00:49:11.398861  380926 buildroot.go:166] provisioning hostname "calico-675149"
	I0804 00:49:11.398892  380926 main.go:141] libmachine: (calico-675149) Calling .GetMachineName
	I0804 00:49:11.399098  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:11.402093  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.402515  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:11.402548  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.402734  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:11.402907  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.403099  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.403267  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:11.403456  380926 main.go:141] libmachine: Using SSH client type: native
	I0804 00:49:11.403689  380926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0804 00:49:11.403702  380926 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-675149 && echo "calico-675149" | sudo tee /etc/hostname
	I0804 00:49:11.520704  380926 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-675149
	
	I0804 00:49:11.520739  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:11.523725  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.524131  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:11.524162  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.524310  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:11.524556  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.524772  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:11.524973  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:11.525174  380926 main.go:141] libmachine: Using SSH client type: native
	I0804 00:49:11.525368  380926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0804 00:49:11.525390  380926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-675149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-675149/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-675149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:49:11.634651  380926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:49:11.634715  380926 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0804 00:49:11.634752  380926 buildroot.go:174] setting up certificates
	I0804 00:49:11.634763  380926 provision.go:84] configureAuth start
	I0804 00:49:11.634779  380926 main.go:141] libmachine: (calico-675149) Calling .GetMachineName
	I0804 00:49:11.635125  380926 main.go:141] libmachine: (calico-675149) Calling .GetIP
	I0804 00:49:11.638094  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.638587  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:11.638636  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.638810  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:11.641220  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.641595  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:11.641639  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:11.641781  380926 provision.go:143] copyHostCerts
	I0804 00:49:11.641834  380926 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0804 00:49:11.641844  380926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0804 00:49:11.641909  380926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0804 00:49:11.642016  380926 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0804 00:49:11.642026  380926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0804 00:49:11.642061  380926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0804 00:49:11.642128  380926 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0804 00:49:11.642138  380926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0804 00:49:11.642174  380926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0804 00:49:11.642246  380926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.calico-675149 san=[127.0.0.1 192.168.39.54 calico-675149 localhost minikube]
	I0804 00:49:12.064033  380926 provision.go:177] copyRemoteCerts
	I0804 00:49:12.064115  380926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:49:12.064151  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:12.067349  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.067764  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.067799  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.068019  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:12.068240  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.068413  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:12.068579  380926 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/id_rsa Username:docker}
	I0804 00:49:12.152332  380926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0804 00:49:12.181842  380926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 00:49:12.208837  380926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:49:12.241498  380926 provision.go:87] duration metric: took 606.718056ms to configureAuth
	I0804 00:49:12.241551  380926 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:49:12.241718  380926 config.go:182] Loaded profile config "calico-675149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:49:12.241810  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:12.244712  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.245177  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.245204  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.245373  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:12.245594  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.245809  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.245979  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:12.246165  380926 main.go:141] libmachine: Using SSH client type: native
	I0804 00:49:12.246372  380926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0804 00:49:12.246390  380926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:49:12.529849  380926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:49:12.529879  380926 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:49:12.529887  380926 main.go:141] libmachine: (calico-675149) Calling .GetURL
	I0804 00:49:12.531274  380926 main.go:141] libmachine: (calico-675149) DBG | Using libvirt version 6000000
	I0804 00:49:12.533814  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.534200  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.534245  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.534408  380926 main.go:141] libmachine: Docker is up and running!
	I0804 00:49:12.534426  380926 main.go:141] libmachine: Reticulating splines...
	I0804 00:49:12.534436  380926 client.go:171] duration metric: took 26.664747615s to LocalClient.Create
	I0804 00:49:12.534469  380926 start.go:167] duration metric: took 26.664828068s to libmachine.API.Create "calico-675149"
	I0804 00:49:12.534482  380926 start.go:293] postStartSetup for "calico-675149" (driver="kvm2")
	I0804 00:49:12.534503  380926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:49:12.534535  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:49:12.534821  380926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:49:12.534846  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:12.537114  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.537491  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.537548  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.537727  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:12.537957  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.538128  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:12.538314  380926 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/id_rsa Username:docker}
	I0804 00:49:12.625288  380926 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:49:12.630017  380926 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:49:12.630045  380926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0804 00:49:12.630138  380926 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0804 00:49:12.630241  380926 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0804 00:49:12.630384  380926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:49:12.642106  380926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:49:12.667528  380926 start.go:296] duration metric: took 133.024008ms for postStartSetup
	I0804 00:49:12.667608  380926 main.go:141] libmachine: (calico-675149) Calling .GetConfigRaw
	I0804 00:49:12.668313  380926 main.go:141] libmachine: (calico-675149) Calling .GetIP
	I0804 00:49:12.671694  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.672205  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.672257  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.672573  380926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/calico-675149/config.json ...
	I0804 00:49:12.672870  380926 start.go:128] duration metric: took 26.82293656s to createHost
	I0804 00:49:12.672907  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:12.675459  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.675924  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.675954  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.676148  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:12.676365  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.676617  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.676811  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:12.677045  380926 main.go:141] libmachine: Using SSH client type: native
	I0804 00:49:12.677282  380926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.54 22 <nil> <nil>}
	I0804 00:49:12.677305  380926 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 00:49:12.791097  380926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722732552.746578445
	
	I0804 00:49:12.791127  380926 fix.go:216] guest clock: 1722732552.746578445
	I0804 00:49:12.791137  380926 fix.go:229] Guest: 2024-08-04 00:49:12.746578445 +0000 UTC Remote: 2024-08-04 00:49:12.672889368 +0000 UTC m=+26.943494114 (delta=73.689077ms)
	I0804 00:49:12.791164  380926 fix.go:200] guest clock delta is within tolerance: 73.689077ms
	I0804 00:49:12.791174  380926 start.go:83] releasing machines lock for "calico-675149", held for 26.941336653s
	I0804 00:49:12.791200  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:49:12.791517  380926 main.go:141] libmachine: (calico-675149) Calling .GetIP
	I0804 00:49:12.794624  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.795099  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.795131  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.795307  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:49:12.796024  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:49:12.796270  380926 main.go:141] libmachine: (calico-675149) Calling .DriverName
	I0804 00:49:12.796383  380926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:49:12.796431  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:12.796614  380926 ssh_runner.go:195] Run: cat /version.json
	I0804 00:49:12.796654  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHHostname
	I0804 00:49:12.799712  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.800023  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.800167  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.800203  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.800457  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:12.800480  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:12.800507  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:12.800679  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHPort
	I0804 00:49:12.800736  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.800878  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:12.800879  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHKeyPath
	I0804 00:49:12.801073  380926 main.go:141] libmachine: (calico-675149) Calling .GetSSHUsername
	I0804 00:49:12.801076  380926 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/id_rsa Username:docker}
	I0804 00:49:12.801207  380926 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/calico-675149/id_rsa Username:docker}
	I0804 00:49:12.883123  380926 ssh_runner.go:195] Run: systemctl --version
	I0804 00:49:12.904873  380926 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:49:13.073831  380926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:49:13.081894  380926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:49:13.081962  380926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:49:13.104634  380926 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:49:13.104727  380926 start.go:495] detecting cgroup driver to use...
	I0804 00:49:13.104816  380926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:49:13.129336  380926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:49:13.144512  380926 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:49:13.144594  380926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:49:13.165217  380926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:49:13.183347  380926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:49:13.321197  380926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:49:13.526439  380926 docker.go:233] disabling docker service ...
	I0804 00:49:13.526525  380926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:49:13.547393  380926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:49:13.567549  380926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:49:13.760747  380926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:49:13.953719  380926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:49:13.969257  380926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:49:13.988629  380926 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:49:13.988708  380926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:49:14.000372  380926 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:49:14.000466  380926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:49:14.013060  380926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:49:14.025764  380926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:49:14.036852  380926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:49:14.051560  380926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:49:14.063403  380926 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:49:14.085465  380926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:49:14.097355  380926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:49:14.108381  380926 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:49:14.108463  380926 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:49:14.126838  380926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:49:14.147627  380926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:49:14.289684  380926 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:49:14.472324  380926 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:49:14.472404  380926 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:49:14.478393  380926 start.go:563] Will wait 60s for crictl version
	I0804 00:49:14.478473  380926 ssh_runner.go:195] Run: which crictl
	I0804 00:49:14.482401  380926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:49:14.530510  380926 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:49:14.530607  380926 ssh_runner.go:195] Run: crio --version
	I0804 00:49:14.564690  380926 ssh_runner.go:195] Run: crio --version
	I0804 00:49:14.597125  380926 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:49:14.598290  380926 main.go:141] libmachine: (calico-675149) Calling .GetIP
	I0804 00:49:14.601008  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:14.601408  380926 main.go:141] libmachine: (calico-675149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ac:40", ip: ""} in network mk-calico-675149: {Iface:virbr3 ExpiryTime:2024-08-04 01:49:01 +0000 UTC Type:0 Mac:52:54:00:06:ac:40 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:calico-675149 Clientid:01:52:54:00:06:ac:40}
	I0804 00:49:14.601439  380926 main.go:141] libmachine: (calico-675149) DBG | domain calico-675149 has defined IP address 192.168.39.54 and MAC address 52:54:00:06:ac:40 in network mk-calico-675149
	I0804 00:49:14.601695  380926 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:49:14.606370  380926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:49:14.619469  380926 kubeadm.go:883] updating cluster {Name:calico-675149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:calico-675149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:49:14.619601  380926 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:49:14.619658  380926 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:49:14.653734  380926 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:49:14.653807  380926 ssh_runner.go:195] Run: which lz4
	I0804 00:49:14.657928  380926 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 00:49:14.662250  380926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:49:14.662292  380926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:49:14.765897  378776 pod_ready.go:92] pod "coredns-7db6d8ff4d-rs72h" in "kube-system" namespace has status "Ready":"True"
	I0804 00:49:14.765933  378776 pod_ready.go:81] duration metric: took 1.508282622s for pod "coredns-7db6d8ff4d-rs72h" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.765946  378776 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.772306  378776 pod_ready.go:92] pod "etcd-kindnet-675149" in "kube-system" namespace has status "Ready":"True"
	I0804 00:49:14.772343  378776 pod_ready.go:81] duration metric: took 6.388153ms for pod "etcd-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.772361  378776 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.778513  378776 pod_ready.go:92] pod "kube-apiserver-kindnet-675149" in "kube-system" namespace has status "Ready":"True"
	I0804 00:49:14.778544  378776 pod_ready.go:81] duration metric: took 6.174674ms for pod "kube-apiserver-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.778556  378776 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.784966  378776 pod_ready.go:92] pod "kube-controller-manager-kindnet-675149" in "kube-system" namespace has status "Ready":"True"
	I0804 00:49:14.785061  378776 pod_ready.go:81] duration metric: took 6.495441ms for pod "kube-controller-manager-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.785079  378776 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-cw7m6" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.848590  378776 pod_ready.go:92] pod "kube-proxy-cw7m6" in "kube-system" namespace has status "Ready":"True"
	I0804 00:49:14.848617  378776 pod_ready.go:81] duration metric: took 63.528842ms for pod "kube-proxy-cw7m6" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:14.848631  378776 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:15.248109  378776 pod_ready.go:92] pod "kube-scheduler-kindnet-675149" in "kube-system" namespace has status "Ready":"True"
	I0804 00:49:15.248136  378776 pod_ready.go:81] duration metric: took 399.497488ms for pod "kube-scheduler-kindnet-675149" in "kube-system" namespace to be "Ready" ...
	I0804 00:49:15.248147  378776 pod_ready.go:38] duration metric: took 2.000058335s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:49:15.248166  378776 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:49:15.248234  378776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:49:15.272047  378776 api_server.go:72] duration metric: took 16.466956482s to wait for apiserver process to appear ...
	I0804 00:49:15.272090  378776 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:49:15.272115  378776 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8443/healthz ...
	I0804 00:49:15.279790  378776 api_server.go:279] https://192.168.61.206:8443/healthz returned 200:
	ok
	I0804 00:49:15.282477  378776 api_server.go:141] control plane version: v1.30.3
	I0804 00:49:15.282510  378776 api_server.go:131] duration metric: took 10.409715ms to wait for apiserver health ...
	I0804 00:49:15.282523  378776 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:49:15.453755  378776 system_pods.go:59] 8 kube-system pods found
	I0804 00:49:15.453796  378776 system_pods.go:61] "coredns-7db6d8ff4d-rs72h" [12bb1458-188c-4b99-bb49-1cf53f14c553] Running
	I0804 00:49:15.453803  378776 system_pods.go:61] "etcd-kindnet-675149" [8c60c0aa-f0d4-4c50-bc19-1842956dc880] Running
	I0804 00:49:15.453809  378776 system_pods.go:61] "kindnet-7nrxt" [e813677a-69ce-470b-b243-f2bb4314b4ba] Running
	I0804 00:49:15.453820  378776 system_pods.go:61] "kube-apiserver-kindnet-675149" [460bd24b-add6-44a7-a103-7904f94b28d1] Running
	I0804 00:49:15.453826  378776 system_pods.go:61] "kube-controller-manager-kindnet-675149" [0836cd00-fc9d-4346-b6bc-847a84953620] Running
	I0804 00:49:15.453832  378776 system_pods.go:61] "kube-proxy-cw7m6" [b6ce0c66-4c60-415e-a2d0-6035c2092e55] Running
	I0804 00:49:15.453843  378776 system_pods.go:61] "kube-scheduler-kindnet-675149" [faff1ee2-5406-4998-b564-ac8cbe0684c8] Running
	I0804 00:49:15.453848  378776 system_pods.go:61] "storage-provisioner" [17ee5040-8905-40cf-a4a6-08c7a2fbcd7c] Running
	I0804 00:49:15.453855  378776 system_pods.go:74] duration metric: took 171.325687ms to wait for pod list to return data ...
	I0804 00:49:15.453866  378776 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:49:15.648418  378776 default_sa.go:45] found service account: "default"
	I0804 00:49:15.648455  378776 default_sa.go:55] duration metric: took 194.57611ms for default service account to be created ...
	I0804 00:49:15.648467  378776 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:49:15.856251  378776 system_pods.go:86] 8 kube-system pods found
	I0804 00:49:15.856284  378776 system_pods.go:89] "coredns-7db6d8ff4d-rs72h" [12bb1458-188c-4b99-bb49-1cf53f14c553] Running
	I0804 00:49:15.856290  378776 system_pods.go:89] "etcd-kindnet-675149" [8c60c0aa-f0d4-4c50-bc19-1842956dc880] Running
	I0804 00:49:15.856294  378776 system_pods.go:89] "kindnet-7nrxt" [e813677a-69ce-470b-b243-f2bb4314b4ba] Running
	I0804 00:49:15.856298  378776 system_pods.go:89] "kube-apiserver-kindnet-675149" [460bd24b-add6-44a7-a103-7904f94b28d1] Running
	I0804 00:49:15.856302  378776 system_pods.go:89] "kube-controller-manager-kindnet-675149" [0836cd00-fc9d-4346-b6bc-847a84953620] Running
	I0804 00:49:15.856307  378776 system_pods.go:89] "kube-proxy-cw7m6" [b6ce0c66-4c60-415e-a2d0-6035c2092e55] Running
	I0804 00:49:15.856311  378776 system_pods.go:89] "kube-scheduler-kindnet-675149" [faff1ee2-5406-4998-b564-ac8cbe0684c8] Running
	I0804 00:49:15.856314  378776 system_pods.go:89] "storage-provisioner" [17ee5040-8905-40cf-a4a6-08c7a2fbcd7c] Running
	I0804 00:49:15.856322  378776 system_pods.go:126] duration metric: took 207.847974ms to wait for k8s-apps to be running ...
	I0804 00:49:15.856329  378776 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:49:15.856378  378776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:49:15.878358  378776 system_svc.go:56] duration metric: took 22.013291ms WaitForService to wait for kubelet
	I0804 00:49:15.878400  378776 kubeadm.go:582] duration metric: took 17.07331442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:49:15.878434  378776 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:49:16.049808  378776 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:49:16.049850  378776 node_conditions.go:123] node cpu capacity is 2
	I0804 00:49:16.049870  378776 node_conditions.go:105] duration metric: took 171.429395ms to run NodePressure ...
	I0804 00:49:16.049889  378776 start.go:241] waiting for startup goroutines ...
	I0804 00:49:16.049898  378776 start.go:246] waiting for cluster config update ...
	I0804 00:49:16.049921  378776 start.go:255] writing updated cluster config ...
	I0804 00:49:16.050361  378776 ssh_runner.go:195] Run: rm -f paused
	I0804 00:49:16.137058  378776 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:49:16.139611  378776 out.go:177] * Done! kubectl is now configured to use "kindnet-675149" cluster and "default" namespace by default
	I0804 00:49:15.026826  379400 api_server.go:279] https://192.168.72.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:49:15.026881  379400 api_server.go:103] status: https://192.168.72.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:49:15.026900  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:15.041981  379400 api_server.go:279] https://192.168.72.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 00:49:15.042027  379400 api_server.go:103] status: https://192.168.72.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 00:49:15.442256  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:15.450897  379400 api_server.go:279] https://192.168.72.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:49:15.450931  379400 api_server.go:103] status: https://192.168.72.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:49:15.942469  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:15.947959  379400 api_server.go:279] https://192.168.72.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:49:15.947999  379400 api_server.go:103] status: https://192.168.72.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:49:16.442718  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:16.452631  379400 api_server.go:279] https://192.168.72.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 00:49:16.452672  379400 api_server.go:103] status: https://192.168.72.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 00:49:16.943201  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:16.948030  379400 api_server.go:279] https://192.168.72.118:8443/healthz returned 200:
	ok
	I0804 00:49:16.955259  379400 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:49:16.955292  379400 api_server.go:131] duration metric: took 24.513197637s to wait for apiserver health ...
	I0804 00:49:16.955304  379400 cni.go:84] Creating CNI manager for ""
	I0804 00:49:16.955313  379400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:49:16.957119  379400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:49:16.958419  379400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:49:16.970954  379400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:49:16.990830  379400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:49:17.005834  379400 system_pods.go:59] 8 kube-system pods found
	I0804 00:49:17.005866  379400 system_pods.go:61] "coredns-6f6b679f8f-6g28k" [b699fd11-060a-4414-ab72-173da96ef7ba] Running
	I0804 00:49:17.005871  379400 system_pods.go:61] "coredns-6f6b679f8f-rl7fx" [151ab159-6009-4fae-aec4-f0152acec8e8] Running
	I0804 00:49:17.005878  379400 system_pods.go:61] "etcd-kubernetes-upgrade-055939" [d2fb1bff-d113-42e9-929e-e93f3ce17e4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:49:17.005885  379400 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-055939" [7b41b4b6-dbe4-4fb4-982f-b1160cb2e915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:49:17.005893  379400 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-055939" [5fcbe42b-ff94-4eac-b5a9-176e6cb169db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:49:17.005899  379400 system_pods.go:61] "kube-proxy-xqhlp" [e269f591-e813-49e3-86b3-50a80998c9af] Running
	I0804 00:49:17.005904  379400 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-055939" [7289692b-704e-43c1-8d30-ecb7603185b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:49:17.005907  379400 system_pods.go:61] "storage-provisioner" [6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6] Running
	I0804 00:49:17.005914  379400 system_pods.go:74] duration metric: took 15.057975ms to wait for pod list to return data ...
	I0804 00:49:17.005922  379400 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:49:17.010500  379400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:49:17.010530  379400 node_conditions.go:123] node cpu capacity is 2
	I0804 00:49:17.010542  379400 node_conditions.go:105] duration metric: took 4.615308ms to run NodePressure ...
	I0804 00:49:17.010573  379400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 00:49:17.391358  379400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:49:17.407931  379400 ops.go:34] apiserver oom_adj: -16
	I0804 00:49:17.407960  379400 kubeadm.go:597] duration metric: took 36.188727108s to restartPrimaryControlPlane
	I0804 00:49:17.407972  379400 kubeadm.go:394] duration metric: took 36.33869303s to StartCluster
	I0804 00:49:17.407998  379400 settings.go:142] acquiring lock: {Name:mk918fd72253bf33e8bae308fd36ed8b1c353763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:49:17.408080  379400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:49:17.409770  379400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/kubeconfig: {Name:mkd789cdd11c6330d283dbc76129ed198eb15398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:49:17.410046  379400 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.118 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:49:17.410116  379400 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:49:17.410230  379400 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-055939"
	I0804 00:49:17.410262  379400 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-055939"
	W0804 00:49:17.410271  379400 addons.go:243] addon storage-provisioner should already be in state true
	I0804 00:49:17.410270  379400 config.go:182] Loaded profile config "kubernetes-upgrade-055939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 00:49:17.410275  379400 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-055939"
	I0804 00:49:17.410341  379400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-055939"
	I0804 00:49:17.410305  379400 host.go:66] Checking if "kubernetes-upgrade-055939" exists ...
	I0804 00:49:17.410865  379400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:49:17.410866  379400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:49:17.410907  379400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:49:17.410935  379400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:49:17.411630  379400 out.go:177] * Verifying Kubernetes components...
	I0804 00:49:17.412949  379400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:49:17.432109  379400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0804 00:49:17.432110  379400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37533
	I0804 00:49:17.432735  379400 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:49:17.432916  379400 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:49:17.433324  379400 main.go:141] libmachine: Using API Version  1
	I0804 00:49:17.433351  379400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:49:17.433475  379400 main.go:141] libmachine: Using API Version  1
	I0804 00:49:17.433500  379400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:49:17.433886  379400 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:49:17.433941  379400 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:49:17.434109  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetState
	I0804 00:49:17.434742  379400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:49:17.434779  379400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:49:17.437245  379400 kapi.go:59] client config for kubernetes-upgrade-055939: &rest.Config{Host:"https://192.168.72.118:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.crt", KeyFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kubernetes-upgrade-055939/client.key", CAFile:"/home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 00:49:17.437638  379400 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-055939"
	W0804 00:49:17.437657  379400 addons.go:243] addon default-storageclass should already be in state true
	I0804 00:49:17.437691  379400 host.go:66] Checking if "kubernetes-upgrade-055939" exists ...
	I0804 00:49:17.438122  379400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:49:17.438167  379400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:49:17.455774  379400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I0804 00:49:17.456556  379400 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:49:17.457287  379400 main.go:141] libmachine: Using API Version  1
	I0804 00:49:17.457309  379400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:49:17.458041  379400 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:49:17.458310  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetState
	I0804 00:49:17.460449  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:49:17.462576  379400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0804 00:49:17.463073  379400 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:49:17.463417  379400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:49:17.463744  379400 main.go:141] libmachine: Using API Version  1
	I0804 00:49:17.463767  379400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:49:17.464287  379400 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:49:17.464769  379400 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:49:17.464791  379400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:49:17.464815  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:49:17.465344  379400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:49:17.465391  379400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:49:17.468787  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:49:17.469306  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:49:17.469351  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:49:17.470050  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:49:17.470347  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:49:17.470613  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:49:17.470936  379400 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa Username:docker}
	I0804 00:49:17.487046  379400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0804 00:49:17.487564  379400 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:49:17.488220  379400 main.go:141] libmachine: Using API Version  1
	I0804 00:49:17.488242  379400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:49:17.488673  379400 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:49:17.488872  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetState
	I0804 00:49:17.491113  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .DriverName
	I0804 00:49:17.491385  379400 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:49:17.491420  379400 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:49:17.491444  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHHostname
	I0804 00:49:17.495156  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:49:17.495889  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:66:f0", ip: ""} in network mk-kubernetes-upgrade-055939: {Iface:virbr2 ExpiryTime:2024-08-04 01:42:57 +0000 UTC Type:0 Mac:52:54:00:b8:66:f0 Iaid: IPaddr:192.168.72.118 Prefix:24 Hostname:kubernetes-upgrade-055939 Clientid:01:52:54:00:b8:66:f0}
	I0804 00:49:17.496041  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | domain kubernetes-upgrade-055939 has defined IP address 192.168.72.118 and MAC address 52:54:00:b8:66:f0 in network mk-kubernetes-upgrade-055939
	I0804 00:49:17.496549  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHPort
	I0804 00:49:17.496846  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHKeyPath
	I0804 00:49:17.497298  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .GetSSHUsername
	I0804 00:49:17.497496  379400 sshutil.go:53] new ssh client: &{IP:192.168.72.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/kubernetes-upgrade-055939/id_rsa Username:docker}
	I0804 00:49:17.662304  379400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:49:17.683774  379400 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:49:17.683905  379400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:49:17.700401  379400 api_server.go:72] duration metric: took 290.313674ms to wait for apiserver process to appear ...
	I0804 00:49:17.700437  379400 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:49:17.700461  379400 api_server.go:253] Checking apiserver healthz at https://192.168.72.118:8443/healthz ...
	I0804 00:49:17.707291  379400 api_server.go:279] https://192.168.72.118:8443/healthz returned 200:
	ok
	I0804 00:49:17.708248  379400 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 00:49:17.708275  379400 api_server.go:131] duration metric: took 7.830321ms to wait for apiserver health ...
	I0804 00:49:17.708284  379400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:49:17.714286  379400 system_pods.go:59] 8 kube-system pods found
	I0804 00:49:17.714321  379400 system_pods.go:61] "coredns-6f6b679f8f-6g28k" [b699fd11-060a-4414-ab72-173da96ef7ba] Running
	I0804 00:49:17.714328  379400 system_pods.go:61] "coredns-6f6b679f8f-rl7fx" [151ab159-6009-4fae-aec4-f0152acec8e8] Running
	I0804 00:49:17.714341  379400 system_pods.go:61] "etcd-kubernetes-upgrade-055939" [d2fb1bff-d113-42e9-929e-e93f3ce17e4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 00:49:17.714350  379400 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-055939" [7b41b4b6-dbe4-4fb4-982f-b1160cb2e915] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 00:49:17.714371  379400 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-055939" [5fcbe42b-ff94-4eac-b5a9-176e6cb169db] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 00:49:17.714380  379400 system_pods.go:61] "kube-proxy-xqhlp" [e269f591-e813-49e3-86b3-50a80998c9af] Running
	I0804 00:49:17.714388  379400 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-055939" [7289692b-704e-43c1-8d30-ecb7603185b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 00:49:17.714398  379400 system_pods.go:61] "storage-provisioner" [6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6] Running
	I0804 00:49:17.714407  379400 system_pods.go:74] duration metric: took 6.115888ms to wait for pod list to return data ...
	I0804 00:49:17.714425  379400 kubeadm.go:582] duration metric: took 304.344362ms to wait for: map[apiserver:true system_pods:true]
	I0804 00:49:17.714444  379400 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:49:17.717983  379400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:49:17.718018  379400 node_conditions.go:123] node cpu capacity is 2
	I0804 00:49:17.718032  379400 node_conditions.go:105] duration metric: took 3.583055ms to run NodePressure ...
	I0804 00:49:17.718049  379400 start.go:241] waiting for startup goroutines ...
	I0804 00:49:17.768421  379400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:49:17.796345  379400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:49:18.829911  379400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.033514963s)
	I0804 00:49:18.829972  379400 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:18.829990  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .Close
	I0804 00:49:18.830156  379400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.061609692s)
	I0804 00:49:18.830189  379400 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:18.830201  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .Close
	I0804 00:49:18.830395  379400 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:18.830415  379400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:18.830425  379400 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:18.830433  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .Close
	I0804 00:49:18.830403  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Closing plugin on server side
	I0804 00:49:18.830526  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Closing plugin on server side
	I0804 00:49:18.830547  379400 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:18.830563  379400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:18.830577  379400 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:18.830586  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .Close
	I0804 00:49:18.830838  379400 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:18.830855  379400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:18.830860  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) DBG | Closing plugin on server side
	I0804 00:49:18.831218  379400 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:18.831237  379400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:18.839350  379400 main.go:141] libmachine: Making call to close driver server
	I0804 00:49:18.839375  379400 main.go:141] libmachine: (kubernetes-upgrade-055939) Calling .Close
	I0804 00:49:18.839667  379400 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:49:18.839689  379400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:49:18.887634  379400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0804 00:49:18.890641  379400 addons.go:510] duration metric: took 1.480493569s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0804 00:49:18.890710  379400 start.go:246] waiting for cluster config update ...
	I0804 00:49:18.890728  379400 start.go:255] writing updated cluster config ...
	I0804 00:49:18.891052  379400 ssh_runner.go:195] Run: rm -f paused
	I0804 00:49:18.962487  379400 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 00:49:18.974195  379400 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-055939" cluster and "default" namespace by default
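	(Editor's note: the repeated "Checking apiserver healthz ... returned 500/200" entries above come from minikube polling the apiserver's /healthz endpoint until it reports healthy. The following is a minimal, hypothetical Go sketch of that kind of polling loop, for readers tracing the log; it is not minikube's actual implementation, the function name is invented, and the URL is simply the one recorded in the log.)

	```go
	// Hypothetical sketch: poll an apiserver /healthz endpoint until it returns
	// HTTP 200 or a deadline passes, mirroring the pattern recorded in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// TLS verification is skipped only because this is a throwaway sketch;
		// minikube authenticates against the cluster with client certificates.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to "healthz returned 200: ok"
				}
				// A 500 body lists the [+]/[-] poststarthook results seen above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.118:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```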
	
	
	==> CRI-O <==
	Aug 04 00:49:19 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:19.984289604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732559984253814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c1e0d98-5585-43e5-b646-310f760261e8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:49:19 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:19.985049431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22cb7ecd-4017-47d7-88bb-8c74091217d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:19 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:19.985123841Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22cb7ecd-4017-47d7-88bb-8c74091217d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:19 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:19.985730227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80ae31ef3aecd6961eb0517608136f4d408c17c87cdbaf75728efde7aef06f64,PodSandboxId:9cbe27a6ad52e32867dbedc424cd3bfa3b860676a37c68273d94d7cca5fed397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722732555724569893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c2abaa8e70e0fbe3063ed9227fd573bd26065ba6102b2297f4f16508df4c56,PodSandboxId:eaa9286a6e718a4bea61a8c0e2a5046950f6d1c44e7259535c809cd5523a19d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555700502686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fcd5dbe9829d45edafc33e596f1ec32ffde0e79b283a6c4e68192647543999,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555693210633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94cdc189a9ef013c39207b7227a609bd3271b5a5b2624cab9d9ce04ffd56c366,PodSandboxId:7787edf977b8e6b1807f4c23c3c87efd39e8d7b1ae0cc34d6227279fec3be06f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722732555665998488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51038aae6b62d8a91e0df2a2a2da52a3cb7d2361f517ecad9ea29b23ae15124c,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17227325526
63100337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5780889279a02e815febfe4c38fcb1a14b2d1dd36fbb9ea92ba10fdc948a5529,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722732552657456328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e3efe9809a7bd64a5bace285e606159beed7c660ced178334501f32f3a5e33,PodSandboxId:f6bedcb7a689e45bb7086bdfa843ba48fc8271702bae9c17aed92fcc814fb0c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722732551693
773022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4187c8a7a42521f2217dc04865ad7fb2868164a0c61589d92d94fe01087cdf5,PodSandboxId:4c1d4a438d49ec8abae793fc391b580adff8ff487f9b7fcedf470ccdb8ce25e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722732551708515825,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b395157a599646c44aeb3e1516fe475a808d087003548c0088ad200eb0e32520,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722732529121372770,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3bed7e31c60f82e4a3fcac69e2b39ebb26b97bc99eb65343a7b229480c99c9,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722732528094010354,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d8a589a32e85972104d80a804aab19d57400c26bc1edfbdc0ad2db28c33447,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732520133255716,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557f8813e2e3ab13c095262f5c1e52c1aad17506112341b36a6dd53e89fedc53,PodSandboxId:25056d00425c2b21392b0153ab2838e08ab325c24ad959f72ba9341b17366a63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722732516810715130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e6ff1b4b12d6dbb13a281a15fa032aa4a06ef33447c3513e6f1f07da7165e2,PodSandboxId:8ba801271498ad2910de4dd4c74c20ae937d538150b3a940b9a1f45f5a1e01ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732517291394185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944dfce4a2a9a617542a44d269cd17f45c38d67eaf9867b6df760dd060e1b55f,PodSandboxId:6bcc4e015f648bcadada85b801db2740b2bff0482380f201bd47264c3144ed11,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722732516691483906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a005063d7c7b0f9b0828cafb914e22e36e5bbe03fa3956d7a72e9bd4968e5d30,PodSandboxId:923f47beeba813a897ced227071a98c29360740afdced254ef382cd0a4423ab6,Metadata:&Con
tainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722732516743220685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9d37ab851c3786fe58143897e885b2cad8d360e7633c5770c7e53130abdda0,PodSandboxId:7c7e0dfc3581e31f9961b5fbef318dddd63c977c9bc860636c5b19a13f451906,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722732516602074292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22cb7ecd-4017-47d7-88bb-8c74091217d1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.038174164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96c2c194-2ff9-455d-b5f4-57a06bd209cb name=/runtime.v1.RuntimeService/Version
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.038315369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96c2c194-2ff9-455d-b5f4-57a06bd209cb name=/runtime.v1.RuntimeService/Version
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.040121174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=896b0d95-0c1f-461a-be4c-1a436cd6e45f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.040495633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732560040469014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=896b0d95-0c1f-461a-be4c-1a436cd6e45f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.041153879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d26df7f-ff07-4d64-9905-453b395a88a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.041210296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d26df7f-ff07-4d64-9905-453b395a88a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.041581298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80ae31ef3aecd6961eb0517608136f4d408c17c87cdbaf75728efde7aef06f64,PodSandboxId:9cbe27a6ad52e32867dbedc424cd3bfa3b860676a37c68273d94d7cca5fed397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722732555724569893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c2abaa8e70e0fbe3063ed9227fd573bd26065ba6102b2297f4f16508df4c56,PodSandboxId:eaa9286a6e718a4bea61a8c0e2a5046950f6d1c44e7259535c809cd5523a19d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555700502686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fcd5dbe9829d45edafc33e596f1ec32ffde0e79b283a6c4e68192647543999,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555693210633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94cdc189a9ef013c39207b7227a609bd3271b5a5b2624cab9d9ce04ffd56c366,PodSandboxId:7787edf977b8e6b1807f4c23c3c87efd39e8d7b1ae0cc34d6227279fec3be06f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722732555665998488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51038aae6b62d8a91e0df2a2a2da52a3cb7d2361f517ecad9ea29b23ae15124c,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17227325526
63100337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5780889279a02e815febfe4c38fcb1a14b2d1dd36fbb9ea92ba10fdc948a5529,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722732552657456328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e3efe9809a7bd64a5bace285e606159beed7c660ced178334501f32f3a5e33,PodSandboxId:f6bedcb7a689e45bb7086bdfa843ba48fc8271702bae9c17aed92fcc814fb0c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722732551693
773022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4187c8a7a42521f2217dc04865ad7fb2868164a0c61589d92d94fe01087cdf5,PodSandboxId:4c1d4a438d49ec8abae793fc391b580adff8ff487f9b7fcedf470ccdb8ce25e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722732551708515825,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b395157a599646c44aeb3e1516fe475a808d087003548c0088ad200eb0e32520,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722732529121372770,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3bed7e31c60f82e4a3fcac69e2b39ebb26b97bc99eb65343a7b229480c99c9,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722732528094010354,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d8a589a32e85972104d80a804aab19d57400c26bc1edfbdc0ad2db28c33447,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732520133255716,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557f8813e2e3ab13c095262f5c1e52c1aad17506112341b36a6dd53e89fedc53,PodSandboxId:25056d00425c2b21392b0153ab2838e08ab325c24ad959f72ba9341b17366a63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722732516810715130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e6ff1b4b12d6dbb13a281a15fa032aa4a06ef33447c3513e6f1f07da7165e2,PodSandboxId:8ba801271498ad2910de4dd4c74c20ae937d538150b3a940b9a1f45f5a1e01ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732517291394185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944dfce4a2a9a617542a44d269cd17f45c38d67eaf9867b6df760dd060e1b55f,PodSandboxId:6bcc4e015f648bcadada85b801db2740b2bff0482380f201bd47264c3144ed11,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722732516691483906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a005063d7c7b0f9b0828cafb914e22e36e5bbe03fa3956d7a72e9bd4968e5d30,PodSandboxId:923f47beeba813a897ced227071a98c29360740afdced254ef382cd0a4423ab6,Metadata:&Con
tainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722732516743220685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9d37ab851c3786fe58143897e885b2cad8d360e7633c5770c7e53130abdda0,PodSandboxId:7c7e0dfc3581e31f9961b5fbef318dddd63c977c9bc860636c5b19a13f451906,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722732516602074292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d26df7f-ff07-4d64-9905-453b395a88a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.090230676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=571b72d2-e3c0-4158-8246-5ffed42264ec name=/runtime.v1.RuntimeService/Version
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.090341698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=571b72d2-e3c0-4158-8246-5ffed42264ec name=/runtime.v1.RuntimeService/Version
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.092202549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bd4bc27-9755-4dca-8895-13f459abcf13 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.093030083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732560092955648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bd4bc27-9755-4dca-8895-13f459abcf13 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.094102115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd3b72ed-a8d3-47b4-a6b4-8f7ee6c7b832 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.094181145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd3b72ed-a8d3-47b4-a6b4-8f7ee6c7b832 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.094700594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80ae31ef3aecd6961eb0517608136f4d408c17c87cdbaf75728efde7aef06f64,PodSandboxId:9cbe27a6ad52e32867dbedc424cd3bfa3b860676a37c68273d94d7cca5fed397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722732555724569893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c2abaa8e70e0fbe3063ed9227fd573bd26065ba6102b2297f4f16508df4c56,PodSandboxId:eaa9286a6e718a4bea61a8c0e2a5046950f6d1c44e7259535c809cd5523a19d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555700502686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fcd5dbe9829d45edafc33e596f1ec32ffde0e79b283a6c4e68192647543999,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555693210633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94cdc189a9ef013c39207b7227a609bd3271b5a5b2624cab9d9ce04ffd56c366,PodSandboxId:7787edf977b8e6b1807f4c23c3c87efd39e8d7b1ae0cc34d6227279fec3be06f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722732555665998488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51038aae6b62d8a91e0df2a2a2da52a3cb7d2361f517ecad9ea29b23ae15124c,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17227325526
63100337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5780889279a02e815febfe4c38fcb1a14b2d1dd36fbb9ea92ba10fdc948a5529,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722732552657456328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e3efe9809a7bd64a5bace285e606159beed7c660ced178334501f32f3a5e33,PodSandboxId:f6bedcb7a689e45bb7086bdfa843ba48fc8271702bae9c17aed92fcc814fb0c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722732551693
773022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4187c8a7a42521f2217dc04865ad7fb2868164a0c61589d92d94fe01087cdf5,PodSandboxId:4c1d4a438d49ec8abae793fc391b580adff8ff487f9b7fcedf470ccdb8ce25e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722732551708515825,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b395157a599646c44aeb3e1516fe475a808d087003548c0088ad200eb0e32520,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722732529121372770,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3bed7e31c60f82e4a3fcac69e2b39ebb26b97bc99eb65343a7b229480c99c9,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722732528094010354,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d8a589a32e85972104d80a804aab19d57400c26bc1edfbdc0ad2db28c33447,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732520133255716,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557f8813e2e3ab13c095262f5c1e52c1aad17506112341b36a6dd53e89fedc53,PodSandboxId:25056d00425c2b21392b0153ab2838e08ab325c24ad959f72ba9341b17366a63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722732516810715130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e6ff1b4b12d6dbb13a281a15fa032aa4a06ef33447c3513e6f1f07da7165e2,PodSandboxId:8ba801271498ad2910de4dd4c74c20ae937d538150b3a940b9a1f45f5a1e01ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732517291394185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944dfce4a2a9a617542a44d269cd17f45c38d67eaf9867b6df760dd060e1b55f,PodSandboxId:6bcc4e015f648bcadada85b801db2740b2bff0482380f201bd47264c3144ed11,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722732516691483906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a005063d7c7b0f9b0828cafb914e22e36e5bbe03fa3956d7a72e9bd4968e5d30,PodSandboxId:923f47beeba813a897ced227071a98c29360740afdced254ef382cd0a4423ab6,Metadata:&Con
tainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722732516743220685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9d37ab851c3786fe58143897e885b2cad8d360e7633c5770c7e53130abdda0,PodSandboxId:7c7e0dfc3581e31f9961b5fbef318dddd63c977c9bc860636c5b19a13f451906,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722732516602074292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd3b72ed-a8d3-47b4-a6b4-8f7ee6c7b832 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.143254035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce2a8378-38ce-4cb1-b502-d37b456f9336 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.143350474Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce2a8378-38ce-4cb1-b502-d37b456f9336 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.145019802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5b4d7b6-8b33-4d9c-a439-89f9eff84ec8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.145646589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732560145603409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5b4d7b6-8b33-4d9c-a439-89f9eff84ec8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.146274937Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4bee85b-707e-410f-87cb-a850fc9406c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.146343464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4bee85b-707e-410f-87cb-a850fc9406c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:49:20 kubernetes-upgrade-055939 crio[3019]: time="2024-08-04 00:49:20.146712018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80ae31ef3aecd6961eb0517608136f4d408c17c87cdbaf75728efde7aef06f64,PodSandboxId:9cbe27a6ad52e32867dbedc424cd3bfa3b860676a37c68273d94d7cca5fed397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722732555724569893,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c2abaa8e70e0fbe3063ed9227fd573bd26065ba6102b2297f4f16508df4c56,PodSandboxId:eaa9286a6e718a4bea61a8c0e2a5046950f6d1c44e7259535c809cd5523a19d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555700502686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fcd5dbe9829d45edafc33e596f1ec32ffde0e79b283a6c4e68192647543999,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732555693210633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94cdc189a9ef013c39207b7227a609bd3271b5a5b2624cab9d9ce04ffd56c366,PodSandboxId:7787edf977b8e6b1807f4c23c3c87efd39e8d7b1ae0cc34d6227279fec3be06f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722732555665998488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51038aae6b62d8a91e0df2a2a2da52a3cb7d2361f517ecad9ea29b23ae15124c,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:17227325526
63100337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5780889279a02e815febfe4c38fcb1a14b2d1dd36fbb9ea92ba10fdc948a5529,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedA
t:1722732552657456328,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e3efe9809a7bd64a5bace285e606159beed7c660ced178334501f32f3a5e33,PodSandboxId:f6bedcb7a689e45bb7086bdfa843ba48fc8271702bae9c17aed92fcc814fb0c3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722732551693
773022,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4187c8a7a42521f2217dc04865ad7fb2868164a0c61589d92d94fe01087cdf5,PodSandboxId:4c1d4a438d49ec8abae793fc391b580adff8ff487f9b7fcedf470ccdb8ce25e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722732551708515825,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b395157a599646c44aeb3e1516fe475a808d087003548c0088ad200eb0e32520,PodSandboxId:e3e40bcd3bfa9e70d37d3c8453ff46f24228c2532702e03934672152efc465fe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722732529121372770,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c81b2931313977d5237e6926239133cd,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3bed7e31c60f82e4a3fcac69e2b39ebb26b97bc99eb65343a7b229480c99c9,PodSandboxId:88982a316492f3b8e06a49144f8a8d781008b798c9efdfb4bcdda3861f8c7ecf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722732528094010354,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67f485f17940340e92b95d84464878b2,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d8a589a32e85972104d80a804aab19d57400c26bc1edfbdc0ad2db28c33447,PodSandboxId:e2a4e3ec5157b2d31642a9857dd799c7680de01b89683aff0a612a6664f5acb9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732520133255716,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rl7fx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ab159-6009-4fae-aec4-f0152acec8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:557f8813e2e3ab13c095262f5c1e52c1aad17506112341b36a6dd53e89fedc53,PodSandboxId:25056d00425c2b21392b0153ab2838e08ab325c24ad959f72ba9341b17366a63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722732516810715130,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xqhlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e269f591-e813-49e3-86b3-50a80998c9af,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e6ff1b4b12d6dbb13a281a15fa032aa4a06ef33447c3513e6f1f07da7165e2,PodSandboxId:8ba801271498ad2910de4dd4c74c20ae937d538150b3a940b9a1f45f5a1e01ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732517291394185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6g28k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b699fd11-060a-4414-ab72-173da96ef7ba,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944dfce4a2a9a617542a44d269cd17f45c38d67eaf9867b6df760dd060e1b55f,PodSandboxId:6bcc4e015f648bcadada85b801db2740b2bff0482380f201bd47264c3144ed11,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722732516691483906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a005063d7c7b0f9b0828cafb914e22e36e5bbe03fa3956d7a72e9bd4968e5d30,PodSandboxId:923f47beeba813a897ced227071a98c29360740afdced254ef382cd0a4423ab6,Metadata:&Con
tainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722732516743220685,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0f3102189535c28467c80820fb01dd7,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c9d37ab851c3786fe58143897e885b2cad8d360e7633c5770c7e53130abdda0,PodSandboxId:7c7e0dfc3581e31f9961b5fbef318dddd63c977c9bc860636c5b19a13f451906,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722732516602074292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-055939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72a4cc2b81dad8b9e68546cc99e6fe01,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4bee85b-707e-410f-87cb-a850fc9406c6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80ae31ef3aecd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   9cbe27a6ad52e       storage-provisioner
	26c2abaa8e70e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   2                   eaa9286a6e718       coredns-6f6b679f8f-6g28k
	44fcd5dbe9829       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   4 seconds ago       Running             coredns                   2                   e2a4e3ec5157b       coredns-6f6b679f8f-rl7fx
	94cdc189a9ef0       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   4 seconds ago       Running             kube-proxy                2                   7787edf977b8e       kube-proxy-xqhlp
	51038aae6b62d       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   7 seconds ago       Running             kube-controller-manager   3                   88982a316492f       kube-controller-manager-kubernetes-upgrade-055939
	5780889279a02       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   7 seconds ago       Running             kube-apiserver            3                   e3e40bcd3bfa9       kube-apiserver-kubernetes-upgrade-055939
	f4187c8a7a425       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   8 seconds ago       Running             kube-scheduler            2                   4c1d4a438d49e       kube-scheduler-kubernetes-upgrade-055939
	92e3efe9809a7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      2                   f6bedcb7a689e       etcd-kubernetes-upgrade-055939
	b395157a59964       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   31 seconds ago      Exited              kube-apiserver            2                   e3e40bcd3bfa9       kube-apiserver-kubernetes-upgrade-055939
	ea3bed7e31c60       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   32 seconds ago      Exited              kube-controller-manager   2                   88982a316492f       kube-controller-manager-kubernetes-upgrade-055939
	60d8a589a32e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago      Exited              coredns                   1                   e2a4e3ec5157b       coredns-6f6b679f8f-rl7fx
	f2e6ff1b4b12d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   8ba801271498a       coredns-6f6b679f8f-6g28k
	557f8813e2e3a       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   43 seconds ago      Exited              kube-proxy                1                   25056d00425c2       kube-proxy-xqhlp
	a005063d7c7b0       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   43 seconds ago      Exited              kube-scheduler            1                   923f47beeba81       kube-scheduler-kubernetes-upgrade-055939
	944dfce4a2a9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   43 seconds ago      Exited              storage-provisioner       1                   6bcc4e015f648       storage-provisioner
	0c9d37ab851c3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   43 seconds ago      Exited              etcd                      1                   7c7e0dfc3581e       etcd-kubernetes-upgrade-055939
	
	
	==> coredns [26c2abaa8e70e0fbe3063ed9227fd573bd26065ba6102b2297f4f16508df4c56] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [44fcd5dbe9829d45edafc33e596f1ec32ffde0e79b283a6c4e68192647543999] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [60d8a589a32e85972104d80a804aab19d57400c26bc1edfbdc0ad2db28c33447] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2e6ff1b4b12d6dbb13a281a15fa032aa4a06ef33447c3513e6f1f07da7165e2] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-055939
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-055939
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:48:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-055939
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:49:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:49:15 +0000   Sun, 04 Aug 2024 00:48:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:49:15 +0000   Sun, 04 Aug 2024 00:48:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:49:15 +0000   Sun, 04 Aug 2024 00:48:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:49:15 +0000   Sun, 04 Aug 2024 00:48:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.118
	  Hostname:    kubernetes-upgrade-055939
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3203efbce8434557a8a5f9c57530d282
	  System UUID:                3203efbc-e843-4557-a8a5-f9c57530d282
	  Boot ID:                    eb5aebc3-23a4-4af2-964b-65a9b799e08e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6g28k                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 coredns-6f6b679f8f-rl7fx                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-kubernetes-upgrade-055939                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         59s
	  kube-system                 kube-apiserver-kubernetes-upgrade-055939              250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-055939    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-xqhlp                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-kubernetes-upgrade-055939              100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node kubernetes-upgrade-055939 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node kubernetes-upgrade-055939 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 71s)  kubelet          Node kubernetes-upgrade-055939 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           60s                node-controller  Node kubernetes-upgrade-055939 event: Registered Node kubernetes-upgrade-055939 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node kubernetes-upgrade-055939 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node kubernetes-upgrade-055939 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node kubernetes-upgrade-055939 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node kubernetes-upgrade-055939 event: Registered Node kubernetes-upgrade-055939 in Controller
	
	
	==> dmesg <==
	[Aug 4 00:48] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.065905] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071258] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.227800] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.129332] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +1.031740] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +5.214408] systemd-fstab-generator[730]: Ignoring "noauto" option for root device
	[  +0.073670] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.033591] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[  +7.188265] systemd-fstab-generator[1218]: Ignoring "noauto" option for root device
	[  +0.128090] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.012396] kauditd_printk_skb: 33 callbacks suppressed
	[ +12.542717] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.097269] kauditd_printk_skb: 66 callbacks suppressed
	[  +0.082510] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.201701] systemd-fstab-generator[2207]: Ignoring "noauto" option for root device
	[  +0.198658] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +1.538932] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +2.920911] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.184085] kauditd_printk_skb: 279 callbacks suppressed
	[  +7.733979] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.919487] systemd-fstab-generator[3954]: Ignoring "noauto" option for root device
	[Aug 4 00:49] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.242348] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.606648] systemd-fstab-generator[4530]: Ignoring "noauto" option for root device
	
	
	==> etcd [0c9d37ab851c3786fe58143897e885b2cad8d360e7633c5770c7e53130abdda0] <==
	{"level":"info","ts":"2024-08-04T00:48:37.359260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:48:37.359283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 received MsgVoteResp from adc6509a13463106 at term 3"}
	{"level":"info","ts":"2024-08-04T00:48:37.359296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:48:37.359310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: adc6509a13463106 elected leader adc6509a13463106 at term 3"}
	{"level":"info","ts":"2024-08-04T00:48:37.363357Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"adc6509a13463106","local-member-attributes":"{Name:kubernetes-upgrade-055939 ClientURLs:[https://192.168.72.118:2379]}","request-path":"/0/members/adc6509a13463106/attributes","cluster-id":"fa04419eb9ff79c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:48:37.363399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:48:37.368081Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:48:37.368834Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:48:37.369614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:48:37.381707Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:48:37.381805Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:48:37.382661Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:48:37.408672Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.118:2379"}
	{"level":"info","ts":"2024-08-04T00:48:37.556146Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T00:48:37.556187Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-055939","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.118:2380"],"advertise-client-urls":["https://192.168.72.118:2379"]}
	{"level":"warn","ts":"2024-08-04T00:48:37.556426Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:48:37.556459Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:48:37.557104Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56656","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:56656: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:48:37.576604Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.118:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T00:48:37.576674Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.118:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T00:48:37.576728Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"adc6509a13463106","current-leader-member-id":"adc6509a13463106"}
	2024/08/04 00:48:37 WARNING: [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	{"level":"info","ts":"2024-08-04T00:48:37.615450Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.72.118:2380"}
	{"level":"info","ts":"2024-08-04T00:48:37.615604Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.72.118:2380"}
	{"level":"info","ts":"2024-08-04T00:48:37.615626Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-055939","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.118:2380"],"advertise-client-urls":["https://192.168.72.118:2379"]}
	
	
	==> etcd [92e3efe9809a7bd64a5bace285e606159beed7c660ced178334501f32f3a5e33] <==
	{"level":"info","ts":"2024-08-04T00:49:11.901666Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.118:2380"}
	{"level":"info","ts":"2024-08-04T00:49:11.901831Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.118:2380"}
	{"level":"info","ts":"2024-08-04T00:49:11.903191Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"adc6509a13463106","initial-advertise-peer-urls":["https://192.168.72.118:2380"],"listen-peer-urls":["https://192.168.72.118:2380"],"advertise-client-urls":["https://192.168.72.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:49:11.903306Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:49:12.982974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-04T00:49:12.983089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:49:12.983126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 received MsgPreVoteResp from adc6509a13463106 at term 3"}
	{"level":"info","ts":"2024-08-04T00:49:12.983155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became candidate at term 4"}
	{"level":"info","ts":"2024-08-04T00:49:12.983179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 received MsgVoteResp from adc6509a13463106 at term 4"}
	{"level":"info","ts":"2024-08-04T00:49:12.983228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"adc6509a13463106 became leader at term 4"}
	{"level":"info","ts":"2024-08-04T00:49:12.983262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: adc6509a13463106 elected leader adc6509a13463106 at term 4"}
	{"level":"info","ts":"2024-08-04T00:49:12.991112Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"adc6509a13463106","local-member-attributes":"{Name:kubernetes-upgrade-055939 ClientURLs:[https://192.168.72.118:2379]}","request-path":"/0/members/adc6509a13463106/attributes","cluster-id":"fa04419eb9ff79c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:49:12.991204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:49:12.993031Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:49:12.993798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:49:12.996644Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T00:49:12.993850Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T00:49:12.997456Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.118:2379"}
	{"level":"info","ts":"2024-08-04T00:49:13.001979Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:49:13.002025Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:49:18.776436Z","caller":"traceutil/trace.go:171","msg":"trace[2130698981] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"105.352564ms","start":"2024-08-04T00:49:18.671068Z","end":"2024-08-04T00:49:18.776420Z","steps":["trace[2130698981] 'process raft request'  (duration: 40.619503ms)","trace[2130698981] 'compare'  (duration: 64.40026ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T00:49:18.974123Z","caller":"traceutil/trace.go:171","msg":"trace[1570270499] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"104.396132ms","start":"2024-08-04T00:49:18.869708Z","end":"2024-08-04T00:49:18.974104Z","steps":["trace[1570270499] 'process raft request'  (duration: 93.461386ms)","trace[1570270499] 'compare'  (duration: 10.638666ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T00:49:19.009291Z","caller":"traceutil/trace.go:171","msg":"trace[965138418] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"131.330691ms","start":"2024-08-04T00:49:18.877946Z","end":"2024-08-04T00:49:19.009276Z","steps":["trace[965138418] 'process raft request'  (duration: 131.253415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T00:49:19.240775Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.150516ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3532670502275304388 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:417 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3994 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-04T00:49:19.241454Z","caller":"traceutil/trace.go:171","msg":"trace[597620404] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"216.775909ms","start":"2024-08-04T00:49:19.024656Z","end":"2024-08-04T00:49:19.241432Z","steps":["trace[597620404] 'process raft request'  (duration: 84.408443ms)","trace[597620404] 'compare'  (duration: 130.388735ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:49:21 up 1 min,  0 users,  load average: 1.16, 0.39, 0.14
	Linux kubernetes-upgrade-055939 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5780889279a02e815febfe4c38fcb1a14b2d1dd36fbb9ea92ba10fdc948a5529] <==
	I0804 00:49:15.035501       1 aggregator.go:171] initial CRD sync complete...
	I0804 00:49:15.035545       1 autoregister_controller.go:144] Starting autoregister controller
	I0804 00:49:15.035553       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:49:15.135834       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:49:15.152188       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0804 00:49:15.152255       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0804 00:49:15.152273       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0804 00:49:15.153482       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0804 00:49:15.153579       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:49:15.155056       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:49:15.153596       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:49:15.159540       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0804 00:49:15.164061       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 00:49:15.168393       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:49:15.187186       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:49:15.187237       1 policy_source.go:224] refreshing policies
	I0804 00:49:15.204167       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:49:15.990890       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 00:49:16.092476       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 00:49:17.109530       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:49:17.132655       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:49:17.225652       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:49:17.287782       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:49:17.316414       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 00:49:18.471585       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b395157a599646c44aeb3e1516fe475a808d087003548c0088ad200eb0e32520] <==
	I0804 00:48:49.291058       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0804 00:48:50.538464       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:50.538545       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 00:48:50.538579       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0804 00:48:50.543307       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:48:50.547011       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0804 00:48:50.547108       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 00:48:50.547326       1 instance.go:232] Using reconciler: lease
	W0804 00:48:50.548590       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:51.539975       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:51.539985       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:51.549328       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:52.976548       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:53.153122       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:53.167973       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:55.239535       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:55.590459       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:56.155146       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:59.218514       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:48:59.524499       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:49:00.198651       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:49:05.571065       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:49:05.729469       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 00:49:05.804871       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 00:49:10.548517       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [51038aae6b62d8a91e0df2a2a2da52a3cb7d2361f517ecad9ea29b23ae15124c] <==
	I0804 00:49:18.489501       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 00:49:18.489620       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0804 00:49:18.508783       1 shared_informer.go:320] Caches are synced for expand
	I0804 00:49:18.514220       1 shared_informer.go:320] Caches are synced for crt configmap
	I0804 00:49:18.517317       1 shared_informer.go:320] Caches are synced for cronjob
	I0804 00:49:18.517388       1 shared_informer.go:320] Caches are synced for ephemeral
	I0804 00:49:18.518539       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 00:49:18.527538       1 shared_informer.go:320] Caches are synced for HPA
	I0804 00:49:18.532453       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0804 00:49:18.536617       1 shared_informer.go:320] Caches are synced for job
	I0804 00:49:18.536689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:49:18.603970       1 shared_informer.go:320] Caches are synced for taint
	I0804 00:49:18.604146       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0804 00:49:18.604374       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-055939"
	I0804 00:49:18.604445       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0804 00:49:18.667388       1 shared_informer.go:320] Caches are synced for deployment
	I0804 00:49:18.669984       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 00:49:18.689725       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:49:18.696178       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:49:18.716250       1 shared_informer.go:320] Caches are synced for disruption
	I0804 00:49:18.978804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="308.692765ms"
	I0804 00:49:18.978993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="124.828µs"
	I0804 00:49:19.114044       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:49:19.114127       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:49:19.156069       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [ea3bed7e31c60f82e4a3fcac69e2b39ebb26b97bc99eb65343a7b229480c99c9] <==
	I0804 00:48:48.525220       1 serving.go:386] Generated self-signed cert in-memory
	I0804 00:48:49.083601       1 controllermanager.go:197] "Starting" version="v1.31.0-rc.0"
	I0804 00:48:49.083645       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:48:49.086039       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0804 00:48:49.087293       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 00:48:49.087431       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0804 00:48:49.087513       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0804 00:49:10.091591       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.72.118:8443/healthz\": net/http: TLS handshake timeout"
	
	
	==> kube-proxy [557f8813e2e3ab13c095262f5c1e52c1aad17506112341b36a6dd53e89fedc53] <==
	
	
	==> kube-proxy [94cdc189a9ef013c39207b7227a609bd3271b5a5b2624cab9d9ce04ffd56c366] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0804 00:49:16.255712       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0804 00:49:16.267112       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.118"]
	E0804 00:49:16.267959       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0804 00:49:16.341074       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0804 00:49:16.341190       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:49:16.341254       1 server_linux.go:169] "Using iptables Proxier"
	I0804 00:49:16.345574       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0804 00:49:16.346290       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0804 00:49:16.346708       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:49:16.349352       1 config.go:197] "Starting service config controller"
	I0804 00:49:16.349456       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:49:16.349698       1 config.go:104] "Starting endpoint slice config controller"
	I0804 00:49:16.349810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:49:16.351664       1 config.go:326] "Starting node config controller"
	I0804 00:49:16.353981       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:49:16.450548       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:49:16.450640       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:49:16.454404       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a005063d7c7b0f9b0828cafb914e22e36e5bbe03fa3956d7a72e9bd4968e5d30] <==
	
	
	==> kube-scheduler [f4187c8a7a42521f2217dc04865ad7fb2868164a0c61589d92d94fe01087cdf5] <==
	W0804 00:49:15.080679       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 00:49:15.080749       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0804 00:49:15.081065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 00:49:15.081113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.081212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0804 00:49:15.081254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.081459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 00:49:15.081490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.081871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 00:49:15.081953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.081967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 00:49:15.081975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.082481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 00:49:15.084161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.084380       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 00:49:15.084438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.084598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 00:49:15.084637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.084829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 00:49:15.084865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.086239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0804 00:49:15.086320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0804 00:49:15.086410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 00:49:15.086441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0804 00:49:17.324123       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 04 00:49:11 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:11.681127    3961 scope.go:117] "RemoveContainer" containerID="0c9d37ab851c3786fe58143897e885b2cad8d360e7633c5770c7e53130abdda0"
	Aug 04 00:49:11 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:11.689833    3961 scope.go:117] "RemoveContainer" containerID="a005063d7c7b0f9b0828cafb914e22e36e5bbe03fa3956d7a72e9bd4968e5d30"
	Aug 04 00:49:11 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:11.758000    3961 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-055939"
	Aug 04 00:49:11 kubernetes-upgrade-055939 kubelet[3961]: E0804 00:49:11.759039    3961 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.118:8443: connect: connection refused" node="kubernetes-upgrade-055939"
	Aug 04 00:49:11 kubernetes-upgrade-055939 kubelet[3961]: E0804 00:49:11.957609    3961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-055939?timeout=10s\": dial tcp 192.168.72.118:8443: connect: connection refused" interval="800ms"
	Aug 04 00:49:12 kubernetes-upgrade-055939 kubelet[3961]: E0804 00:49:12.439674    3961 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732552439174437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:49:12 kubernetes-upgrade-055939 kubelet[3961]: E0804 00:49:12.440065    3961 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732552439174437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 04 00:49:12 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:12.637846    3961 scope.go:117] "RemoveContainer" containerID="ea3bed7e31c60f82e4a3fcac69e2b39ebb26b97bc99eb65343a7b229480c99c9"
	Aug 04 00:49:12 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:12.638474    3961 scope.go:117] "RemoveContainer" containerID="b395157a599646c44aeb3e1516fe475a808d087003548c0088ad200eb0e32520"
	Aug 04 00:49:12 kubernetes-upgrade-055939 kubelet[3961]: E0804 00:49:12.759186    3961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-055939?timeout=10s\": dial tcp 192.168.72.118:8443: connect: connection refused" interval="1.6s"
	Aug 04 00:49:13 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:13.360657    3961 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-055939"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.276518    3961 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-055939"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.276643    3961 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-055939"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.276677    3961 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.278293    3961 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.319022    3961 apiserver.go:52] "Watching apiserver"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.344589    3961 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.394011    3961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6-tmp\") pod \"storage-provisioner\" (UID: \"6cb0f79a-ab70-48e3-b24d-170d3e4b0fe6\") " pod="kube-system/storage-provisioner"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.394130    3961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e269f591-e813-49e3-86b3-50a80998c9af-xtables-lock\") pod \"kube-proxy-xqhlp\" (UID: \"e269f591-e813-49e3-86b3-50a80998c9af\") " pod="kube-system/kube-proxy-xqhlp"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.394159    3961 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e269f591-e813-49e3-86b3-50a80998c9af-lib-modules\") pod \"kube-proxy-xqhlp\" (UID: \"e269f591-e813-49e3-86b3-50a80998c9af\") " pod="kube-system/kube-proxy-xqhlp"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.624757    3961 scope.go:117] "RemoveContainer" containerID="557f8813e2e3ab13c095262f5c1e52c1aad17506112341b36a6dd53e89fedc53"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.625232    3961 scope.go:117] "RemoveContainer" containerID="60d8a589a32e85972104d80a804aab19d57400c26bc1edfbdc0ad2db28c33447"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.625566    3961 scope.go:117] "RemoveContainer" containerID="f2e6ff1b4b12d6dbb13a281a15fa032aa4a06ef33447c3513e6f1f07da7165e2"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: I0804 00:49:15.625834    3961 scope.go:117] "RemoveContainer" containerID="944dfce4a2a9a617542a44d269cd17f45c38d67eaf9867b6df760dd060e1b55f"
	Aug 04 00:49:15 kubernetes-upgrade-055939 kubelet[3961]: E0804 00:49:15.736456    3961 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-055939\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-055939"
	
	
	==> storage-provisioner [80ae31ef3aecd6961eb0517608136f4d408c17c87cdbaf75728efde7aef06f64] <==
	I0804 00:49:16.045280       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 00:49:16.076510       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 00:49:16.079127       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 00:49:16.105266       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 00:49:16.105504       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-055939_8e5e0e02-3a12-46b8-ad81-2c061efa323d!
	I0804 00:49:16.107192       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad09be3c-e702-4a29-a22a-afaa046c2d5a", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-055939_8e5e0e02-3a12-46b8-ad81-2c061efa323d became leader
	I0804 00:49:16.206077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-055939_8e5e0e02-3a12-46b8-ad81-2c061efa323d!
	
	
	==> storage-provisioner [944dfce4a2a9a617542a44d269cd17f45c38d67eaf9867b6df760dd060e1b55f] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-055939 -n kubernetes-upgrade-055939
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-055939 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-055939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-055939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-055939: (1.289890221s)
--- FAIL: TestKubernetesUpgrade (466.01s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (54.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-026475 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-026475 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.276048457s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-026475] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-026475" primary control-plane node in "pause-026475" cluster
	* Updating the running kvm2 "pause-026475" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-026475" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:46:12.266869  377334 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:46:12.267012  377334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:46:12.267022  377334 out.go:304] Setting ErrFile to fd 2...
	I0804 00:46:12.267028  377334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:46:12.267266  377334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:46:12.267885  377334 out.go:298] Setting JSON to false
	I0804 00:46:12.268972  377334 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":34120,"bootTime":1722698252,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:46:12.269046  377334 start.go:139] virtualization: kvm guest
	I0804 00:46:12.271285  377334 out.go:177] * [pause-026475] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:46:12.272906  377334 out.go:177]   - MINIKUBE_LOCATION=19370
	I0804 00:46:12.272978  377334 notify.go:220] Checking for updates...
	I0804 00:46:12.275769  377334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:46:12.277562  377334 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:46:12.278990  377334 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:46:12.280448  377334 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:46:12.281726  377334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:46:12.283393  377334 config.go:182] Loaded profile config "pause-026475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:12.283883  377334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:46:12.283945  377334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:46:12.300000  377334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I0804 00:46:12.300584  377334 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:46:12.301248  377334 main.go:141] libmachine: Using API Version  1
	I0804 00:46:12.301276  377334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:46:12.301789  377334 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:46:12.302063  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:12.302409  377334 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:46:12.302779  377334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:46:12.302841  377334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:46:12.318518  377334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0804 00:46:12.319094  377334 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:46:12.319657  377334 main.go:141] libmachine: Using API Version  1
	I0804 00:46:12.319681  377334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:46:12.320048  377334 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:46:12.320310  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:12.358657  377334 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 00:46:12.359962  377334 start.go:297] selected driver: kvm2
	I0804 00:46:12.359973  377334 start.go:901] validating driver "kvm2" against &{Name:pause-026475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-026475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:46:12.360146  377334 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:46:12.360551  377334 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:46:12.360640  377334 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:46:12.376960  377334 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:46:12.377988  377334 cni.go:84] Creating CNI manager for ""
	I0804 00:46:12.378008  377334 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:46:12.378085  377334 start.go:340] cluster config:
	{Name:pause-026475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-026475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:46:12.378274  377334 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:46:12.380138  377334 out.go:177] * Starting "pause-026475" primary control-plane node in "pause-026475" cluster
	I0804 00:46:12.381321  377334 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:46:12.381363  377334 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:46:12.381380  377334 cache.go:56] Caching tarball of preloaded images
	I0804 00:46:12.381472  377334 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:46:12.381483  377334 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:46:12.381639  377334 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/config.json ...
	I0804 00:46:12.381860  377334 start.go:360] acquireMachinesLock for pause-026475: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:46:27.242462  377334 start.go:364] duration metric: took 14.860541722s to acquireMachinesLock for "pause-026475"
	I0804 00:46:27.242538  377334 start.go:96] Skipping create...Using existing machine configuration
	I0804 00:46:27.242550  377334 fix.go:54] fixHost starting: 
	I0804 00:46:27.242983  377334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:46:27.243039  377334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:46:27.261122  377334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I0804 00:46:27.261622  377334 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:46:27.262216  377334 main.go:141] libmachine: Using API Version  1
	I0804 00:46:27.262248  377334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:46:27.262577  377334 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:46:27.262752  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:27.262891  377334 main.go:141] libmachine: (pause-026475) Calling .GetState
	I0804 00:46:27.264635  377334 fix.go:112] recreateIfNeeded on pause-026475: state=Running err=<nil>
	W0804 00:46:27.264660  377334 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 00:46:27.266823  377334 out.go:177] * Updating the running kvm2 "pause-026475" VM ...
	I0804 00:46:27.268207  377334 machine.go:94] provisionDockerMachine start ...
	I0804 00:46:27.268247  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:27.268482  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:27.271687  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.272191  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:27.272232  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.272427  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:27.272636  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:27.272794  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:27.272944  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:27.273124  377334 main.go:141] libmachine: Using SSH client type: native
	I0804 00:46:27.273339  377334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.154 22 <nil> <nil>}
	I0804 00:46:27.273350  377334 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 00:46:27.390684  377334 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-026475
	
	I0804 00:46:27.390713  377334 main.go:141] libmachine: (pause-026475) Calling .GetMachineName
	I0804 00:46:27.390977  377334 buildroot.go:166] provisioning hostname "pause-026475"
	I0804 00:46:27.391010  377334 main.go:141] libmachine: (pause-026475) Calling .GetMachineName
	I0804 00:46:27.391185  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:27.394080  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.394442  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:27.394471  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.394659  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:27.394884  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:27.395081  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:27.395237  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:27.395508  377334 main.go:141] libmachine: Using SSH client type: native
	I0804 00:46:27.395743  377334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.154 22 <nil> <nil>}
	I0804 00:46:27.395758  377334 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-026475 && echo "pause-026475" | sudo tee /etc/hostname
	I0804 00:46:27.522482  377334 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-026475
	
	I0804 00:46:27.522521  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:27.525741  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.526213  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:27.526237  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.526439  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:27.526638  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:27.526815  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:27.526978  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:27.527185  377334 main.go:141] libmachine: Using SSH client type: native
	I0804 00:46:27.527420  377334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.154 22 <nil> <nil>}
	I0804 00:46:27.527444  377334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-026475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-026475/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-026475' | sudo tee -a /etc/hosts; 
				fi
			fi
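	The guarded script above is idempotent: it leaves /etc/hosts alone when a line already ends in the hostname, otherwise it rewrites the 127.0.1.1 entry or appends one. A quick way to confirm the result on the guest (the expected line is an assumption based on the script, not captured output) would be:
	
		$ grep pause-026475 /etc/hosts
		127.0.1.1 pause-026475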
	I0804 00:46:27.639081  377334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:46:27.639125  377334 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19370-323890/.minikube CaCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19370-323890/.minikube}
	I0804 00:46:27.639152  377334 buildroot.go:174] setting up certificates
	I0804 00:46:27.639166  377334 provision.go:84] configureAuth start
	I0804 00:46:27.639180  377334 main.go:141] libmachine: (pause-026475) Calling .GetMachineName
	I0804 00:46:27.639525  377334 main.go:141] libmachine: (pause-026475) Calling .GetIP
	I0804 00:46:27.642541  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.642980  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:27.643006  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.643211  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:27.645960  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.646382  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:27.646412  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.646581  377334 provision.go:143] copyHostCerts
	I0804 00:46:27.646667  377334 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem, removing ...
	I0804 00:46:27.646692  377334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem
	I0804 00:46:27.646780  377334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/key.pem (1675 bytes)
	I0804 00:46:27.646948  377334 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem, removing ...
	I0804 00:46:27.646964  377334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem
	I0804 00:46:27.646999  377334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/ca.pem (1078 bytes)
	I0804 00:46:27.647098  377334 exec_runner.go:144] found /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem, removing ...
	I0804 00:46:27.647108  377334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem
	I0804 00:46:27.647136  377334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19370-323890/.minikube/cert.pem (1123 bytes)
	I0804 00:46:27.647215  377334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem org=jenkins.pause-026475 san=[127.0.0.1 192.168.61.154 localhost minikube pause-026475]
	I0804 00:46:27.858744  377334 provision.go:177] copyRemoteCerts
	I0804 00:46:27.858815  377334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:46:27.858843  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:27.861814  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.862180  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:27.862209  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:27.862368  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:27.862546  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:27.862734  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:27.862854  377334 sshutil.go:53] new ssh client: &{IP:192.168.61.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/pause-026475/id_rsa Username:docker}
	I0804 00:46:27.953288  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0804 00:46:27.984602  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 00:46:28.010905  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:46:28.041383  377334 provision.go:87] duration metric: took 402.199746ms to configureAuth
	I0804 00:46:28.041420  377334 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:46:28.041692  377334 config.go:182] Loaded profile config "pause-026475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:28.041778  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:28.044708  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:28.045173  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:28.045219  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:28.045624  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:28.045890  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:28.046125  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:28.046326  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:28.046575  377334 main.go:141] libmachine: Using SSH client type: native
	I0804 00:46:28.046815  377334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.154 22 <nil> <nil>}
	I0804 00:46:28.046838  377334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:46:33.675369  377334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
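	The command above writes a one-line environment file and restarts CRI-O so the extra --insecure-registry flag for the service CIDR takes effect. A hedged way to double-check it on the guest (file path and variable come from the command itself; that crio.service actually sources /etc/sysconfig/crio.minikube is an assumption about the minikube guest image):
	
		$ cat /etc/sysconfig/crio.minikube
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		$ sudo systemctl show crio -p ActiveState
		ActiveState=active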
	
	I0804 00:46:33.675395  377334 machine.go:97] duration metric: took 6.407161388s to provisionDockerMachine
	I0804 00:46:33.675408  377334 start.go:293] postStartSetup for "pause-026475" (driver="kvm2")
	I0804 00:46:33.675418  377334 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:46:33.675452  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:33.675875  377334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:46:33.675911  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:33.679085  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.679491  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:33.679519  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.679669  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:33.679871  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:33.680067  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:33.680196  377334 sshutil.go:53] new ssh client: &{IP:192.168.61.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/pause-026475/id_rsa Username:docker}
	I0804 00:46:33.770726  377334 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:46:33.777265  377334 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:46:33.777300  377334 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/addons for local assets ...
	I0804 00:46:33.777390  377334 filesync.go:126] Scanning /home/jenkins/minikube-integration/19370-323890/.minikube/files for local assets ...
	I0804 00:46:33.777500  377334 filesync.go:149] local asset: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem -> 3310972.pem in /etc/ssl/certs
	I0804 00:46:33.777651  377334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 00:46:33.788404  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:46:33.816352  377334 start.go:296] duration metric: took 140.928497ms for postStartSetup
	I0804 00:46:33.816415  377334 fix.go:56] duration metric: took 6.573848928s for fixHost
	I0804 00:46:33.816442  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:33.819290  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.819715  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:33.819749  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.819916  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:33.820142  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:33.820342  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:33.820526  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:33.820749  377334 main.go:141] libmachine: Using SSH client type: native
	I0804 00:46:33.820939  377334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.154 22 <nil> <nil>}
	I0804 00:46:33.820953  377334 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:46:33.939239  377334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722732393.931720064
	
	I0804 00:46:33.939268  377334 fix.go:216] guest clock: 1722732393.931720064
	I0804 00:46:33.939280  377334 fix.go:229] Guest: 2024-08-04 00:46:33.931720064 +0000 UTC Remote: 2024-08-04 00:46:33.816420467 +0000 UTC m=+21.597501200 (delta=115.299597ms)
	I0804 00:46:33.939310  377334 fix.go:200] guest clock delta is within tolerance: 115.299597ms
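	The reported delta is simply the guest clock minus the host-side timestamp recorded at the end of fixHost: 1722732393.931720064 - 1722732393.816420467 = 0.115299597 s, i.e. about 115.3 ms, which is within minikube's drift tolerance, so the guest clock is left untouched.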
	I0804 00:46:33.939319  377334 start.go:83] releasing machines lock for "pause-026475", held for 6.696823552s
	I0804 00:46:33.939346  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:33.939660  377334 main.go:141] libmachine: (pause-026475) Calling .GetIP
	I0804 00:46:33.942392  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.942844  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:33.942874  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.943077  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:33.943754  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:33.943963  377334 main.go:141] libmachine: (pause-026475) Calling .DriverName
	I0804 00:46:33.944076  377334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:46:33.944140  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:33.944433  377334 ssh_runner.go:195] Run: cat /version.json
	I0804 00:46:33.944460  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHHostname
	I0804 00:46:33.947369  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.947710  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.947835  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:33.947863  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.948047  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:33.948151  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:33.948177  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:33.948232  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:33.948316  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHPort
	I0804 00:46:33.948393  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:33.948463  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHKeyPath
	I0804 00:46:33.948604  377334 sshutil.go:53] new ssh client: &{IP:192.168.61.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/pause-026475/id_rsa Username:docker}
	I0804 00:46:33.948717  377334 main.go:141] libmachine: (pause-026475) Calling .GetSSHUsername
	I0804 00:46:33.948886  377334 sshutil.go:53] new ssh client: &{IP:192.168.61.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/pause-026475/id_rsa Username:docker}
	I0804 00:46:34.036390  377334 ssh_runner.go:195] Run: systemctl --version
	I0804 00:46:34.061302  377334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:46:34.268555  377334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:46:34.275335  377334 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:46:34.275419  377334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:46:34.287358  377334 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 00:46:34.287388  377334 start.go:495] detecting cgroup driver to use...
	I0804 00:46:34.287467  377334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:46:34.327988  377334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:46:34.349349  377334 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:46:34.349405  377334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:46:34.446463  377334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:46:34.535482  377334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:46:34.831152  377334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:46:35.074772  377334 docker.go:233] disabling docker service ...
	I0804 00:46:35.074856  377334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:46:35.188313  377334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:46:35.229164  377334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:46:35.531755  377334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:46:35.783190  377334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:46:35.800851  377334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:46:35.832909  377334 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:46:35.832987  377334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:46:35.851119  377334 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:46:35.851219  377334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:46:35.864264  377334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:46:35.883306  377334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:46:35.899758  377334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:46:35.918640  377334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:46:35.935674  377334 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:46:35.951728  377334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
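	Taken together, the sed edits above amount to a drop-in roughly like the following; this is a reconstruction from the commands themselves (section headers added for readability), not a capture of the actual file:
	
		# /etc/crio/crio.conf.d/02-crio.conf (sketch)
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"
	
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]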
	I0804 00:46:35.970860  377334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:46:36.008572  377334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:46:36.065201  377334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:46:36.318252  377334 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:46:37.036816  377334 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:46:37.036891  377334 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:46:37.043243  377334 start.go:563] Will wait 60s for crictl version
	I0804 00:46:37.043329  377334 ssh_runner.go:195] Run: which crictl
	I0804 00:46:37.050695  377334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:46:37.090021  377334 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:46:37.090120  377334 ssh_runner.go:195] Run: crio --version
	I0804 00:46:37.134344  377334 ssh_runner.go:195] Run: crio --version
	I0804 00:46:37.239153  377334 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:46:37.293378  377334 main.go:141] libmachine: (pause-026475) Calling .GetIP
	I0804 00:46:37.296703  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:37.297135  377334 main.go:141] libmachine: (pause-026475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:55:e9", ip: ""} in network mk-pause-026475: {Iface:virbr3 ExpiryTime:2024-08-04 01:45:28 +0000 UTC Type:0 Mac:52:54:00:52:55:e9 Iaid: IPaddr:192.168.61.154 Prefix:24 Hostname:pause-026475 Clientid:01:52:54:00:52:55:e9}
	I0804 00:46:37.297170  377334 main.go:141] libmachine: (pause-026475) DBG | domain pause-026475 has defined IP address 192.168.61.154 and MAC address 52:54:00:52:55:e9 in network mk-pause-026475
	I0804 00:46:37.297418  377334 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 00:46:37.331559  377334 kubeadm.go:883] updating cluster {Name:pause-026475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-026475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:46:37.331806  377334 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:46:37.331880  377334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:46:37.509066  377334 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:46:37.509094  377334 crio.go:433] Images already preloaded, skipping extraction
	I0804 00:46:37.509154  377334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:46:37.597399  377334 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:46:37.597426  377334 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:46:37.597435  377334 kubeadm.go:934] updating node { 192.168.61.154 8443 v1.30.3 crio true true} ...
	I0804 00:46:37.597563  377334 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-026475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-026475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
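	The empty ExecStart= line in the unit drop-in above is the standard systemd idiom for overriding a command: it clears the ExecStart inherited from the base kubelet.service before the fully specified kubelet command line replaces it. The merged unit can be inspected on the guest with, for example:
	
		$ systemctl cat kubelet
		(shows /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in copied below)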
	I0804 00:46:37.597651  377334 ssh_runner.go:195] Run: crio config
	I0804 00:46:37.824380  377334 cni.go:84] Creating CNI manager for ""
	I0804 00:46:37.824413  377334 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:46:37.824428  377334 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:46:37.824454  377334 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.154 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-026475 NodeName:pause-026475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:46:37.824631  377334 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.154
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-026475"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.154
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.154"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
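	The generated config bundles four kubeadm, kubelet and kube-proxy API documents separated by ---. A simple sanity check on the file it is later copied to (path taken from the scp line below) is to list the document kinds:
	
		$ sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
		kind: InitConfiguration
		kind: ClusterConfiguration
		kind: KubeletConfiguration
		kind: KubeProxyConfiguration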
	
	I0804 00:46:37.824706  377334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:46:37.849159  377334 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:46:37.849248  377334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:46:37.881678  377334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0804 00:46:37.926685  377334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:46:37.954613  377334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0804 00:46:37.979669  377334 ssh_runner.go:195] Run: grep 192.168.61.154	control-plane.minikube.internal$ /etc/hosts
	I0804 00:46:37.984613  377334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:46:38.158186  377334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:46:38.174738  377334 certs.go:68] Setting up /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475 for IP: 192.168.61.154
	I0804 00:46:38.174766  377334 certs.go:194] generating shared ca certs ...
	I0804 00:46:38.174788  377334 certs.go:226] acquiring lock for ca certs: {Name:mkccfc094433554f42346833b79ab0114e5aec2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:46:38.175008  377334 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key
	I0804 00:46:38.175075  377334 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key
	I0804 00:46:38.175089  377334 certs.go:256] generating profile certs ...
	I0804 00:46:38.175198  377334 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/client.key
	I0804 00:46:38.175277  377334 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/apiserver.key.bd7bcfcb
	I0804 00:46:38.175334  377334 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/proxy-client.key
	I0804 00:46:38.175478  377334 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem (1338 bytes)
	W0804 00:46:38.175519  377334 certs.go:480] ignoring /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097_empty.pem, impossibly tiny 0 bytes
	I0804 00:46:38.175533  377334 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca-key.pem (1679 bytes)
	I0804 00:46:38.175575  377334 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem (1078 bytes)
	I0804 00:46:38.175608  377334 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:46:38.175641  377334 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/certs/key.pem (1675 bytes)
	I0804 00:46:38.175694  377334 certs.go:484] found cert: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem (1708 bytes)
	I0804 00:46:38.176607  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:46:38.203872  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0804 00:46:38.232464  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:46:38.260879  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0804 00:46:38.288388  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0804 00:46:38.319355  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:46:38.355234  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:46:38.417597  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/pause-026475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:46:38.446208  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/ssl/certs/3310972.pem --> /usr/share/ca-certificates/3310972.pem (1708 bytes)
	I0804 00:46:38.473448  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:46:38.499573  377334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19370-323890/.minikube/certs/331097.pem --> /usr/share/ca-certificates/331097.pem (1338 bytes)
	I0804 00:46:38.524901  377334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:46:38.543438  377334 ssh_runner.go:195] Run: openssl version
	I0804 00:46:38.550057  377334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/331097.pem && ln -fs /usr/share/ca-certificates/331097.pem /etc/ssl/certs/331097.pem"
	I0804 00:46:38.562137  377334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/331097.pem
	I0804 00:46:38.567125  377334 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 23:44 /usr/share/ca-certificates/331097.pem
	I0804 00:46:38.567201  377334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/331097.pem
	I0804 00:46:38.573322  377334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/331097.pem /etc/ssl/certs/51391683.0"
	I0804 00:46:38.583423  377334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3310972.pem && ln -fs /usr/share/ca-certificates/3310972.pem /etc/ssl/certs/3310972.pem"
	I0804 00:46:38.595440  377334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3310972.pem
	I0804 00:46:38.600605  377334 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 23:44 /usr/share/ca-certificates/3310972.pem
	I0804 00:46:38.600678  377334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3310972.pem
	I0804 00:46:38.606980  377334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3310972.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 00:46:38.617176  377334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:46:38.629293  377334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:46:38.634501  377334 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 23:03 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:46:38.634560  377334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:46:38.640652  377334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
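	Each ln -fs above names the trust-store symlink after the certificate's OpenSSL subject hash plus a .0 suffix, which is how CApath-style lookups in /etc/ssl/certs are resolved; the hash is exactly what the preceding openssl x509 -hash call prints, for example:
	
		$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		b5213941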
	I0804 00:46:38.651117  377334 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:46:38.655888  377334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 00:46:38.662187  377334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 00:46:38.668054  377334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 00:46:38.675378  377334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 00:46:38.683127  377334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 00:46:38.688954  377334 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
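	The -checkend 86400 probes ask openssl whether each certificate expires within the next 86400 seconds (24 hours); a zero exit status means the cert is good for at least another day, so minikube skips regenerating it. Stand-alone, the same check looks like this (path taken from the apiserver cert copied earlier; output assumes the cert is still valid):
	
		$ openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; echo $?
		Certificate will not expire
		0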
	I0804 00:46:38.695424  377334 kubeadm.go:392] StartCluster: {Name:pause-026475 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-026475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:46:38.695546  377334 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:46:38.695617  377334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:46:38.732058  377334 cri.go:89] found id: "efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9"
	I0804 00:46:38.732081  377334 cri.go:89] found id: "38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9"
	I0804 00:46:38.732086  377334 cri.go:89] found id: "3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671"
	I0804 00:46:38.732090  377334 cri.go:89] found id: "c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725"
	I0804 00:46:38.732094  377334 cri.go:89] found id: "9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd"
	I0804 00:46:38.732099  377334 cri.go:89] found id: "8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90"
	I0804 00:46:38.732102  377334 cri.go:89] found id: ""
	I0804 00:46:38.732161  377334 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-026475 -n pause-026475
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-026475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-026475 logs -n 25: (1.364954943s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-055939       | kubernetes-upgrade-055939 | jenkins | v1.33.1 | 04 Aug 24 00:41 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-439963        | force-systemd-env-439963  | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| start   | -p stopped-upgrade-742754          | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:44 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:43 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-404249             | offline-crio-404249       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:43 UTC |
	| start   | -p running-upgrade-380850          | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:44 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:43 UTC |
	| start   | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:44 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-742754 stop        | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:44 UTC |
	| start   | -p stopped-upgrade-742754          | stopped-upgrade-742754    | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:45 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-419151 sudo        | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| start   | -p running-upgrade-380850          | running-upgrade-380850    | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:45 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:44 UTC |
	| start   | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:45 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-419151 sudo        | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:45 UTC |
	| start   | -p pause-026475 --memory=2048      | pause-026475              | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:46 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-742754          | stopped-upgrade-742754    | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:45 UTC |
	| start   | -p cert-expiration-443385          | cert-expiration-443385    | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:46 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-380850          | running-upgrade-380850    | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:45 UTC |
	| start   | -p force-systemd-flag-040288       | force-systemd-flag-040288 | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:46 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-026475                    | pause-026475              | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC | 04 Aug 24 00:47 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-040288 ssh cat  | force-systemd-flag-040288 | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC | 04 Aug 24 00:46 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-040288       | force-systemd-flag-040288 | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC | 04 Aug 24 00:46 UTC |
	| start   | -p cert-options-941979             | cert-options-941979       | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:46:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:46:50.730276  377783 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:46:50.730505  377783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:46:50.730508  377783 out.go:304] Setting ErrFile to fd 2...
	I0804 00:46:50.730512  377783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:46:50.730724  377783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:46:50.731344  377783 out.go:298] Setting JSON to false
	I0804 00:46:50.732286  377783 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":34159,"bootTime":1722698252,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:46:50.732346  377783 start.go:139] virtualization: kvm guest
	I0804 00:46:50.734456  377783 out.go:177] * [cert-options-941979] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:46:50.735778  377783 out.go:177]   - MINIKUBE_LOCATION=19370
	I0804 00:46:50.735816  377783 notify.go:220] Checking for updates...
	I0804 00:46:50.738436  377783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:46:50.739855  377783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:46:50.741174  377783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:46:50.742591  377783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:46:50.743932  377783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:46:50.745601  377783 config.go:182] Loaded profile config "cert-expiration-443385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:50.745708  377783 config.go:182] Loaded profile config "kubernetes-upgrade-055939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:46:50.745813  377783 config.go:182] Loaded profile config "pause-026475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:50.745904  377783 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:46:50.785017  377783 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:46:50.786423  377783 start.go:297] selected driver: kvm2
	I0804 00:46:50.786432  377783 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:46:50.786442  377783 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:46:50.787194  377783 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:46:50.787275  377783 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:46:50.803588  377783 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:46:50.803634  377783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:46:50.803962  377783 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:46:50.803986  377783 cni.go:84] Creating CNI manager for ""
	I0804 00:46:50.803993  377783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:46:50.804001  377783 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:46:50.804075  377783 start.go:340] cluster config:
	{Name:cert-options-941979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-941979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:46:50.804173  377783 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:46:50.807090  377783 out.go:177] * Starting "cert-options-941979" primary control-plane node in "cert-options-941979" cluster
	I0804 00:46:50.808586  377783 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:46:50.808623  377783 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:46:50.808631  377783 cache.go:56] Caching tarball of preloaded images
	I0804 00:46:50.808716  377783 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:46:50.808723  377783 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:46:50.808824  377783 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/cert-options-941979/config.json ...
	I0804 00:46:50.808837  377783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/cert-options-941979/config.json: {Name:mk8fa0a29f7dfb8a38d63405815015fc69a976db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:46:50.808965  377783 start.go:360] acquireMachinesLock for cert-options-941979: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:46:50.808998  377783 start.go:364] duration metric: took 15.511µs to acquireMachinesLock for "cert-options-941979"
	I0804 00:46:50.809013  377783 start.go:93] Provisioning new machine with config: &{Name:cert-options-941979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-941979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:46:50.809084  377783 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:46:49.111016  377334 pod_ready.go:102] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"False"
	I0804 00:46:51.112577  377334 pod_ready.go:102] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"False"
	I0804 00:46:50.810762  377783 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0804 00:46:50.810941  377783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:46:50.810973  377783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:46:50.826271  377783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0804 00:46:50.826806  377783 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:46:50.827388  377783 main.go:141] libmachine: Using API Version  1
	I0804 00:46:50.827407  377783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:46:50.827878  377783 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:46:50.828080  377783 main.go:141] libmachine: (cert-options-941979) Calling .GetMachineName
	I0804 00:46:50.828243  377783 main.go:141] libmachine: (cert-options-941979) Calling .DriverName
	I0804 00:46:50.828384  377783 start.go:159] libmachine.API.Create for "cert-options-941979" (driver="kvm2")
	I0804 00:46:50.828410  377783 client.go:168] LocalClient.Create starting
	I0804 00:46:50.828445  377783 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0804 00:46:50.828484  377783 main.go:141] libmachine: Decoding PEM data...
	I0804 00:46:50.828500  377783 main.go:141] libmachine: Parsing certificate...
	I0804 00:46:50.828562  377783 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0804 00:46:50.828578  377783 main.go:141] libmachine: Decoding PEM data...
	I0804 00:46:50.828594  377783 main.go:141] libmachine: Parsing certificate...
	I0804 00:46:50.828613  377783 main.go:141] libmachine: Running pre-create checks...
	I0804 00:46:50.828618  377783 main.go:141] libmachine: (cert-options-941979) Calling .PreCreateCheck
	I0804 00:46:50.828993  377783 main.go:141] libmachine: (cert-options-941979) Calling .GetConfigRaw
	I0804 00:46:50.829420  377783 main.go:141] libmachine: Creating machine...
	I0804 00:46:50.829427  377783 main.go:141] libmachine: (cert-options-941979) Calling .Create
	I0804 00:46:50.829570  377783 main.go:141] libmachine: (cert-options-941979) Creating KVM machine...
	I0804 00:46:50.830782  377783 main.go:141] libmachine: (cert-options-941979) DBG | found existing default KVM network
	I0804 00:46:50.833391  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.833205  377806 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0804 00:46:50.834425  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.834342  377806 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:36:97:76} reservation:<nil>}
	I0804 00:46:50.835290  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.835209  377806 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:9e:10} reservation:<nil>}
	I0804 00:46:50.836033  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.835955  377806 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:2a:56} reservation:<nil>}
	I0804 00:46:50.837279  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.837193  377806 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003b0610}
	I0804 00:46:50.837326  377783 main.go:141] libmachine: (cert-options-941979) DBG | created network xml: 
	I0804 00:46:50.837337  377783 main.go:141] libmachine: (cert-options-941979) DBG | <network>
	I0804 00:46:50.837356  377783 main.go:141] libmachine: (cert-options-941979) DBG |   <name>mk-cert-options-941979</name>
	I0804 00:46:50.837360  377783 main.go:141] libmachine: (cert-options-941979) DBG |   <dns enable='no'/>
	I0804 00:46:50.837365  377783 main.go:141] libmachine: (cert-options-941979) DBG |   
	I0804 00:46:50.837371  377783 main.go:141] libmachine: (cert-options-941979) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0804 00:46:50.837375  377783 main.go:141] libmachine: (cert-options-941979) DBG |     <dhcp>
	I0804 00:46:50.837379  377783 main.go:141] libmachine: (cert-options-941979) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0804 00:46:50.837384  377783 main.go:141] libmachine: (cert-options-941979) DBG |     </dhcp>
	I0804 00:46:50.837388  377783 main.go:141] libmachine: (cert-options-941979) DBG |   </ip>
	I0804 00:46:50.837392  377783 main.go:141] libmachine: (cert-options-941979) DBG |   
	I0804 00:46:50.837395  377783 main.go:141] libmachine: (cert-options-941979) DBG | </network>
	I0804 00:46:50.837401  377783 main.go:141] libmachine: (cert-options-941979) DBG | 
	I0804 00:46:50.842992  377783 main.go:141] libmachine: (cert-options-941979) DBG | trying to create private KVM network mk-cert-options-941979 192.168.83.0/24...
	I0804 00:46:50.916161  377783 main.go:141] libmachine: (cert-options-941979) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979 ...
	I0804 00:46:50.916184  377783 main.go:141] libmachine: (cert-options-941979) DBG | private KVM network mk-cert-options-941979 192.168.83.0/24 created
	I0804 00:46:50.916196  377783 main.go:141] libmachine: (cert-options-941979) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:46:50.916215  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.916096  377806 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:46:50.916327  377783 main.go:141] libmachine: (cert-options-941979) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:46:51.175432  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:51.175285  377806 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/id_rsa...
	I0804 00:46:51.318949  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:51.318786  377806 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/cert-options-941979.rawdisk...
	I0804 00:46:51.318975  377783 main.go:141] libmachine: (cert-options-941979) DBG | Writing magic tar header
	I0804 00:46:51.318993  377783 main.go:141] libmachine: (cert-options-941979) DBG | Writing SSH key tar header
	I0804 00:46:51.319003  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:51.318960  377806 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979 ...
	I0804 00:46:51.319142  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979
	I0804 00:46:51.319156  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0804 00:46:51.319164  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979 (perms=drwx------)
	I0804 00:46:51.319174  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:46:51.319179  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0804 00:46:51.319192  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0804 00:46:51.319202  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:46:51.319211  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:46:51.319224  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0804 00:46:51.319230  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:46:51.319235  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:46:51.319253  377783 main.go:141] libmachine: (cert-options-941979) Creating domain...
	I0804 00:46:51.319259  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:46:51.319263  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home
	I0804 00:46:51.319269  377783 main.go:141] libmachine: (cert-options-941979) DBG | Skipping /home - not owner
	I0804 00:46:51.320880  377783 main.go:141] libmachine: (cert-options-941979) define libvirt domain using xml: 
	I0804 00:46:51.320899  377783 main.go:141] libmachine: (cert-options-941979) <domain type='kvm'>
	I0804 00:46:51.320908  377783 main.go:141] libmachine: (cert-options-941979)   <name>cert-options-941979</name>
	I0804 00:46:51.320913  377783 main.go:141] libmachine: (cert-options-941979)   <memory unit='MiB'>2048</memory>
	I0804 00:46:51.320929  377783 main.go:141] libmachine: (cert-options-941979)   <vcpu>2</vcpu>
	I0804 00:46:51.320938  377783 main.go:141] libmachine: (cert-options-941979)   <features>
	I0804 00:46:51.320942  377783 main.go:141] libmachine: (cert-options-941979)     <acpi/>
	I0804 00:46:51.320946  377783 main.go:141] libmachine: (cert-options-941979)     <apic/>
	I0804 00:46:51.320950  377783 main.go:141] libmachine: (cert-options-941979)     <pae/>
	I0804 00:46:51.320956  377783 main.go:141] libmachine: (cert-options-941979)     
	I0804 00:46:51.320960  377783 main.go:141] libmachine: (cert-options-941979)   </features>
	I0804 00:46:51.320964  377783 main.go:141] libmachine: (cert-options-941979)   <cpu mode='host-passthrough'>
	I0804 00:46:51.320968  377783 main.go:141] libmachine: (cert-options-941979)   
	I0804 00:46:51.320971  377783 main.go:141] libmachine: (cert-options-941979)   </cpu>
	I0804 00:46:51.320975  377783 main.go:141] libmachine: (cert-options-941979)   <os>
	I0804 00:46:51.320979  377783 main.go:141] libmachine: (cert-options-941979)     <type>hvm</type>
	I0804 00:46:51.320983  377783 main.go:141] libmachine: (cert-options-941979)     <boot dev='cdrom'/>
	I0804 00:46:51.320986  377783 main.go:141] libmachine: (cert-options-941979)     <boot dev='hd'/>
	I0804 00:46:51.320991  377783 main.go:141] libmachine: (cert-options-941979)     <bootmenu enable='no'/>
	I0804 00:46:51.320994  377783 main.go:141] libmachine: (cert-options-941979)   </os>
	I0804 00:46:51.320998  377783 main.go:141] libmachine: (cert-options-941979)   <devices>
	I0804 00:46:51.321002  377783 main.go:141] libmachine: (cert-options-941979)     <disk type='file' device='cdrom'>
	I0804 00:46:51.321009  377783 main.go:141] libmachine: (cert-options-941979)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/boot2docker.iso'/>
	I0804 00:46:51.321013  377783 main.go:141] libmachine: (cert-options-941979)       <target dev='hdc' bus='scsi'/>
	I0804 00:46:51.321022  377783 main.go:141] libmachine: (cert-options-941979)       <readonly/>
	I0804 00:46:51.321024  377783 main.go:141] libmachine: (cert-options-941979)     </disk>
	I0804 00:46:51.321054  377783 main.go:141] libmachine: (cert-options-941979)     <disk type='file' device='disk'>
	I0804 00:46:51.321071  377783 main.go:141] libmachine: (cert-options-941979)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:46:51.321098  377783 main.go:141] libmachine: (cert-options-941979)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/cert-options-941979.rawdisk'/>
	I0804 00:46:51.321103  377783 main.go:141] libmachine: (cert-options-941979)       <target dev='hda' bus='virtio'/>
	I0804 00:46:51.321107  377783 main.go:141] libmachine: (cert-options-941979)     </disk>
	I0804 00:46:51.321111  377783 main.go:141] libmachine: (cert-options-941979)     <interface type='network'>
	I0804 00:46:51.321117  377783 main.go:141] libmachine: (cert-options-941979)       <source network='mk-cert-options-941979'/>
	I0804 00:46:51.321121  377783 main.go:141] libmachine: (cert-options-941979)       <model type='virtio'/>
	I0804 00:46:51.321127  377783 main.go:141] libmachine: (cert-options-941979)     </interface>
	I0804 00:46:51.321133  377783 main.go:141] libmachine: (cert-options-941979)     <interface type='network'>
	I0804 00:46:51.321140  377783 main.go:141] libmachine: (cert-options-941979)       <source network='default'/>
	I0804 00:46:51.321150  377783 main.go:141] libmachine: (cert-options-941979)       <model type='virtio'/>
	I0804 00:46:51.321158  377783 main.go:141] libmachine: (cert-options-941979)     </interface>
	I0804 00:46:51.321164  377783 main.go:141] libmachine: (cert-options-941979)     <serial type='pty'>
	I0804 00:46:51.321172  377783 main.go:141] libmachine: (cert-options-941979)       <target port='0'/>
	I0804 00:46:51.321177  377783 main.go:141] libmachine: (cert-options-941979)     </serial>
	I0804 00:46:51.321183  377783 main.go:141] libmachine: (cert-options-941979)     <console type='pty'>
	I0804 00:46:51.321187  377783 main.go:141] libmachine: (cert-options-941979)       <target type='serial' port='0'/>
	I0804 00:46:51.321191  377783 main.go:141] libmachine: (cert-options-941979)     </console>
	I0804 00:46:51.321197  377783 main.go:141] libmachine: (cert-options-941979)     <rng model='virtio'>
	I0804 00:46:51.321202  377783 main.go:141] libmachine: (cert-options-941979)       <backend model='random'>/dev/random</backend>
	I0804 00:46:51.321206  377783 main.go:141] libmachine: (cert-options-941979)     </rng>
	I0804 00:46:51.321212  377783 main.go:141] libmachine: (cert-options-941979)     
	I0804 00:46:51.321216  377783 main.go:141] libmachine: (cert-options-941979)     
	I0804 00:46:51.321223  377783 main.go:141] libmachine: (cert-options-941979)   </devices>
	I0804 00:46:51.321233  377783 main.go:141] libmachine: (cert-options-941979) </domain>
	I0804 00:46:51.321244  377783 main.go:141] libmachine: (cert-options-941979) 
	I0804 00:46:51.325639  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:cd:cc:89 in network default
	I0804 00:46:51.326223  377783 main.go:141] libmachine: (cert-options-941979) Ensuring networks are active...
	I0804 00:46:51.326235  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:51.326873  377783 main.go:141] libmachine: (cert-options-941979) Ensuring network default is active
	I0804 00:46:51.327237  377783 main.go:141] libmachine: (cert-options-941979) Ensuring network mk-cert-options-941979 is active
	I0804 00:46:51.327712  377783 main.go:141] libmachine: (cert-options-941979) Getting domain xml...
	I0804 00:46:51.328421  377783 main.go:141] libmachine: (cert-options-941979) Creating domain...
	I0804 00:46:52.590148  377783 main.go:141] libmachine: (cert-options-941979) Waiting to get IP...
	I0804 00:46:52.590934  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:52.591378  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:52.591425  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:52.591376  377806 retry.go:31] will retry after 284.890797ms: waiting for machine to come up
	I0804 00:46:52.878098  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:52.878590  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:52.878619  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:52.878564  377806 retry.go:31] will retry after 268.373072ms: waiting for machine to come up
	I0804 00:46:53.149174  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:53.149631  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:53.149663  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:53.149612  377806 retry.go:31] will retry after 437.861466ms: waiting for machine to come up
	I0804 00:46:53.589293  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:53.589930  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:53.589953  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:53.589872  377806 retry.go:31] will retry after 458.061449ms: waiting for machine to come up
	I0804 00:46:54.049267  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:54.049877  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:54.049895  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:54.049812  377806 retry.go:31] will retry after 588.850048ms: waiting for machine to come up
	I0804 00:46:54.640690  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:54.641249  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:54.641267  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:54.641179  377806 retry.go:31] will retry after 828.879886ms: waiting for machine to come up
	I0804 00:46:55.471843  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:55.472499  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:55.472523  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:55.472419  377806 retry.go:31] will retry after 924.844441ms: waiting for machine to come up
	I0804 00:46:53.610659  377334 pod_ready.go:102] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"False"
	I0804 00:46:55.613647  377334 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:55.613678  377334 pod_ready.go:81] duration metric: took 8.510089589s for pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:55.613693  377334 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.121529  377334 pod_ready.go:92] pod "etcd-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:57.121559  377334 pod_ready.go:81] duration metric: took 1.507857236s for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.121571  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.126598  377334 pod_ready.go:92] pod "kube-apiserver-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:57.126623  377334 pod_ready.go:81] duration metric: took 5.044659ms for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.126632  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.133302  377334 pod_ready.go:92] pod "kube-controller-manager-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.133327  377334 pod_ready.go:81] duration metric: took 2.006688251s for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.133336  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.138982  377334 pod_ready.go:92] pod "kube-proxy-lkxtd" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.139014  377334 pod_ready.go:81] duration metric: took 5.670224ms for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.139028  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.144509  377334 pod_ready.go:92] pod "kube-scheduler-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.144536  377334 pod_ready.go:81] duration metric: took 5.499312ms for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.144546  377334 pod_ready.go:38] duration metric: took 12.048065979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:46:59.144604  377334 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:46:59.158318  377334 ops.go:34] apiserver oom_adj: -16
	I0804 00:46:59.158349  377334 kubeadm.go:597] duration metric: took 20.37116598s to restartPrimaryControlPlane
	I0804 00:46:59.158362  377334 kubeadm.go:394] duration metric: took 20.462947366s to StartCluster
	I0804 00:46:59.158389  377334 settings.go:142] acquiring lock: {Name:mk918fd72253bf33e8bae308fd36ed8b1c353763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:46:59.158477  377334 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:46:59.159449  377334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/kubeconfig: {Name:mkd789cdd11c6330d283dbc76129ed198eb15398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:46:59.159731  377334 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:46:59.159861  377334 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:46:59.159986  377334 config.go:182] Loaded profile config "pause-026475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:59.161287  377334 out.go:177] * Enabled addons: 
	I0804 00:46:59.161293  377334 out.go:177] * Verifying Kubernetes components...
	I0804 00:46:56.398791  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:56.399355  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:56.399375  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:56.399288  377806 retry.go:31] will retry after 1.389067548s: waiting for machine to come up
	I0804 00:46:57.790558  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:57.791043  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:57.791073  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:57.790993  377806 retry.go:31] will retry after 1.72029868s: waiting for machine to come up
	I0804 00:46:59.513078  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:59.513526  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:59.513542  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:59.513478  377806 retry.go:31] will retry after 1.791687997s: waiting for machine to come up
	I0804 00:46:59.162492  377334 addons.go:510] duration metric: took 2.640686ms for enable addons: enabled=[]
	I0804 00:46:59.162530  377334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:46:59.320398  377334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:46:59.337539  377334 node_ready.go:35] waiting up to 6m0s for node "pause-026475" to be "Ready" ...
	I0804 00:46:59.340742  377334 node_ready.go:49] node "pause-026475" has status "Ready":"True"
	I0804 00:46:59.340774  377334 node_ready.go:38] duration metric: took 3.184416ms for node "pause-026475" to be "Ready" ...
	I0804 00:46:59.340786  377334 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:46:59.345779  377334 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.607940  377334 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.607967  377334 pod_ready.go:81] duration metric: took 262.164826ms for pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.607981  377334 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.008156  377334 pod_ready.go:92] pod "etcd-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:00.008196  377334 pod_ready.go:81] duration metric: took 400.199214ms for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.008211  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.408120  377334 pod_ready.go:92] pod "kube-apiserver-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:00.408168  377334 pod_ready.go:81] duration metric: took 399.934775ms for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.408183  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.808321  377334 pod_ready.go:92] pod "kube-controller-manager-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:00.808346  377334 pod_ready.go:81] duration metric: took 400.155433ms for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.808363  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.207887  377334 pod_ready.go:92] pod "kube-proxy-lkxtd" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:01.207922  377334 pod_ready.go:81] duration metric: took 399.551169ms for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.207934  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.608167  377334 pod_ready.go:92] pod "kube-scheduler-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:01.608192  377334 pod_ready.go:81] duration metric: took 400.250966ms for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.608200  377334 pod_ready.go:38] duration metric: took 2.26740327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:47:01.608233  377334 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:47:01.608314  377334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:47:01.626182  377334 api_server.go:72] duration metric: took 2.466404272s to wait for apiserver process to appear ...
	I0804 00:47:01.626215  377334 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:47:01.626243  377334 api_server.go:253] Checking apiserver healthz at https://192.168.61.154:8443/healthz ...
	I0804 00:47:01.630635  377334 api_server.go:279] https://192.168.61.154:8443/healthz returned 200:
	ok
	I0804 00:47:01.631727  377334 api_server.go:141] control plane version: v1.30.3
	I0804 00:47:01.631757  377334 api_server.go:131] duration metric: took 5.532976ms to wait for apiserver health ...
	I0804 00:47:01.631767  377334 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:47:01.810288  377334 system_pods.go:59] 6 kube-system pods found
	I0804 00:47:01.810320  377334 system_pods.go:61] "coredns-7db6d8ff4d-sfzdw" [ef6bbea6-d3d9-4488-b8b7-c25d32490f03] Running
	I0804 00:47:01.810326  377334 system_pods.go:61] "etcd-pause-026475" [28537153-9571-4b3f-8adf-e77398e99ed4] Running
	I0804 00:47:01.810329  377334 system_pods.go:61] "kube-apiserver-pause-026475" [3c0c14e9-f3c1-4422-a61b-7d1680d92c55] Running
	I0804 00:47:01.810333  377334 system_pods.go:61] "kube-controller-manager-pause-026475" [5ea96e8f-e05e-42fd-9dd0-190ea692fd4c] Running
	I0804 00:47:01.810336  377334 system_pods.go:61] "kube-proxy-lkxtd" [20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7] Running
	I0804 00:47:01.810339  377334 system_pods.go:61] "kube-scheduler-pause-026475" [0ae126cb-34d9-4838-946a-41f6f6f33dba] Running
	I0804 00:47:01.810345  377334 system_pods.go:74] duration metric: took 178.571599ms to wait for pod list to return data ...
	I0804 00:47:01.810353  377334 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:47:02.007698  377334 default_sa.go:45] found service account: "default"
	I0804 00:47:02.007731  377334 default_sa.go:55] duration metric: took 197.371662ms for default service account to be created ...
	I0804 00:47:02.007741  377334 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:47:02.210544  377334 system_pods.go:86] 6 kube-system pods found
	I0804 00:47:02.210574  377334 system_pods.go:89] "coredns-7db6d8ff4d-sfzdw" [ef6bbea6-d3d9-4488-b8b7-c25d32490f03] Running
	I0804 00:47:02.210580  377334 system_pods.go:89] "etcd-pause-026475" [28537153-9571-4b3f-8adf-e77398e99ed4] Running
	I0804 00:47:02.210585  377334 system_pods.go:89] "kube-apiserver-pause-026475" [3c0c14e9-f3c1-4422-a61b-7d1680d92c55] Running
	I0804 00:47:02.210588  377334 system_pods.go:89] "kube-controller-manager-pause-026475" [5ea96e8f-e05e-42fd-9dd0-190ea692fd4c] Running
	I0804 00:47:02.210592  377334 system_pods.go:89] "kube-proxy-lkxtd" [20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7] Running
	I0804 00:47:02.210596  377334 system_pods.go:89] "kube-scheduler-pause-026475" [0ae126cb-34d9-4838-946a-41f6f6f33dba] Running
	I0804 00:47:02.210601  377334 system_pods.go:126] duration metric: took 202.856251ms to wait for k8s-apps to be running ...
	I0804 00:47:02.210608  377334 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:47:02.210671  377334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:47:02.225618  377334 system_svc.go:56] duration metric: took 14.996816ms WaitForService to wait for kubelet
	I0804 00:47:02.225652  377334 kubeadm.go:582] duration metric: took 3.065882932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:47:02.225673  377334 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:47:02.408446  377334 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:47:02.408486  377334 node_conditions.go:123] node cpu capacity is 2
	I0804 00:47:02.408499  377334 node_conditions.go:105] duration metric: took 182.820714ms to run NodePressure ...
	I0804 00:47:02.408515  377334 start.go:241] waiting for startup goroutines ...
	I0804 00:47:02.408524  377334 start.go:246] waiting for cluster config update ...
	I0804 00:47:02.408535  377334 start.go:255] writing updated cluster config ...
	I0804 00:47:02.409374  377334 ssh_runner.go:195] Run: rm -f paused
	I0804 00:47:02.470162  377334 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:47:02.472103  377334 out.go:177] * Done! kubectl is now configured to use "pause-026475" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.243020314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eebe6e53-120f-4073-90b3-20e4a4018e34 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.244411841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91d8c559-3e5c-45bc-9416-2719a05e241b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.245083406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732423245053668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91d8c559-3e5c-45bc-9416-2719a05e241b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.246675427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68b1d972-e713-43c3-a022-dd5586e5698a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.246756259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68b1d972-e713-43c3-a022-dd5586e5698a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.247106135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68b1d972-e713-43c3-a022-dd5586e5698a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.277137572Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=8c097504-4a32-43c1-8061-523f29f739f7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.277532868Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sfzdw,Uid:ef6bbea6-d3d9-4488-b8b7-c25d32490f03,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397603134502,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:09.123163815Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-026475,Uid:9b33707aea7190551e20501409933eb2,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397599188380,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9b33707aea7190551e20501409933eb2,kubernetes.io/config.seen: 2024-08-04T00:45:54.847407917Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&PodSandboxMetadata{Name:etcd-pause-026475,Uid:c5c0a9d05da944e4b81fb42bb2ad6c23,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397598542525,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,tier: cont
rol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.154:2379,kubernetes.io/config.hash: c5c0a9d05da944e4b81fb42bb2ad6c23,kubernetes.io/config.seen: 2024-08-04T00:45:54.847413374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-026475,Uid:110694c6253f95e9447c9b29460bc946,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397593597032,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 110694c6253f95e9447c9b29460bc946,kubernetes.io/config.seen: 2024-08-04T00:45:54.847412339Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&PodSandboxMetadata{Name:kube-proxy-lkxtd,Uid:20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397508901296,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:07.854280020Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-026475,Uid:ce1303851bb2317bd9914030617c11e7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397481115693,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd9914030617c11e7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.154:8443,kubernetes.io/config.hash: ce1303851bb2317bd9914030617c11e7,kubernetes.io/config.seen: 2024-08-04T00:45:54.847414458Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-026475,Uid:9b33707aea7190551e20501409933eb2,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394390197330,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,tier: control-plane,},Annotations:map[stri
ng]string{kubernetes.io/config.hash: 9b33707aea7190551e20501409933eb2,kubernetes.io/config.seen: 2024-08-04T00:45:54.847407917Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sfzdw,Uid:ef6bbea6-d3d9-4488-b8b7-c25d32490f03,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394382737615,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:09.123163815Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-026475,Uid:110694c6253f95e9447c9b29460bc
946,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394370748246,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 110694c6253f95e9447c9b29460bc946,kubernetes.io/config.seen: 2024-08-04T00:45:54.847412339Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-026475,Uid:ce1303851bb2317bd9914030617c11e7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394345675946,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce130385
1bb2317bd9914030617c11e7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.154:8443,kubernetes.io/config.hash: ce1303851bb2317bd9914030617c11e7,kubernetes.io/config.seen: 2024-08-04T00:45:54.847414458Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&PodSandboxMetadata{Name:etcd-pause-026475,Uid:c5c0a9d05da944e4b81fb42bb2ad6c23,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394312838957,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.154:2379,kubernetes.io/config.hash: c5c0a9d05da944e4b81fb42bb2ad6c23,kubernetes.io/config.seen: 202
4-08-04T00:45:54.847413374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&PodSandboxMetadata{Name:kube-proxy-lkxtd,Uid:20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394297822903,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:07.854280020Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b25fad84d1fe9a2dcab8177d324ad93f3634332c501963485432d369db890c9c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-tgqtq,Uid:3832d7fd-b315-462a-bfcf-dfd50ec24b62,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:17227323
69681587555,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-tgqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3832d7fd-b315-462a-bfcf-dfd50ec24b62,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:09.057041154Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8c097504-4a32-43c1-8061-523f29f739f7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.278253793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d98a63be-e875-4cf8-ad67-5bbb0ad410f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.278316020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d98a63be-e875-4cf8-ad67-5bbb0ad410f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.278638087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d98a63be-e875-4cf8-ad67-5bbb0ad410f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.296586057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=532cddd8-8b93-43aa-b99b-ec9fb42cd792 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.296684221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=532cddd8-8b93-43aa-b99b-ec9fb42cd792 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.298122435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3fd2b53-82df-4e5c-82a8-2906b9991d3b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.298616781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732423298592791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3fd2b53-82df-4e5c-82a8-2906b9991d3b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.299152053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e94bd5b-a067-441d-8d66-d613aac9e67e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.299220208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e94bd5b-a067-441d-8d66-d613aac9e67e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.299517773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e94bd5b-a067-441d-8d66-d613aac9e67e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.341916450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b469a2d1-1aa4-4ccc-b99b-12ccd3783bd8 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.342013520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b469a2d1-1aa4-4ccc-b99b-12ccd3783bd8 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.344282386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64646825-c2e4-4529-b3a2-a3023c8a3112 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.344739417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732423344717540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64646825-c2e4-4529-b3a2-a3023c8a3112 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.345193615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6123621-d323-45f6-bc8b-1422346e0e1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.345263972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6123621-d323-45f6-bc8b-1422346e0e1d name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:03 pause-026475 crio[2928]: time="2024-08-04 00:47:03.345563461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6123621-d323-45f6-bc8b-1422346e0e1d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29cb4afd95a11       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 seconds ago      Running             kube-proxy                2                   aef16e8560fb0       kube-proxy-lkxtd
	d396f27170fce       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago      Running             coredns                   2                   5d9706be6c0b5       coredns-7db6d8ff4d-sfzdw
	0ad1505c92ab9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   21 seconds ago      Running             kube-controller-manager   2                   dc76ee338ef21       kube-controller-manager-pause-026475
	4d5fd019a91be       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago      Running             kube-apiserver            2                   f5dcfa6b22d6e       kube-apiserver-pause-026475
	0e02550fe523d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   21 seconds ago      Running             kube-scheduler            2                   94e30356c6e55       kube-scheduler-pause-026475
	6690d21dcb307       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago      Running             etcd                      2                   b23794878b9f9       etcd-pause-026475
	efb6ce46e8895       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   27 seconds ago      Exited              coredns                   1                   93478432b62c9       coredns-7db6d8ff4d-sfzdw
	38fe22695b3d7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   28 seconds ago      Exited              kube-controller-manager   1                   e71208a6a980e       kube-controller-manager-pause-026475
	3e31423cf2d48       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   28 seconds ago      Exited              kube-proxy                1                   d075ad1d5af00       kube-proxy-lkxtd
	c2d7a87a29a77       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   28 seconds ago      Exited              kube-scheduler            1                   a9c5f34ecc694       kube-scheduler-pause-026475
	9117817a698be       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   28 seconds ago      Exited              kube-apiserver            1                   2f7cc28777323       kube-apiserver-pause-026475
	8cb0cddfc6142       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago      Exited              etcd                      1                   aaae48a8da338       etcd-pause-026475
	
	
	==> coredns [d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35816 - 60265 "HINFO IN 5140896754027385394.490744798263124189. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014539237s
	
	
	==> coredns [efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9] <==
	
	
	==> describe nodes <==
	Name:               pause-026475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-026475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=pause-026475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_45_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:45:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-026475
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:46:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.154
	  Hostname:    pause-026475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d6b46d6b45f4a26ab7191fca0d4edce
	  System UUID:                1d6b46d6-b45f-4a26-ab71-91fca0d4edce
	  Boot ID:                    c8ce224f-8c0d-42fb-988e-5ef66ef9e165
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-sfzdw                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     55s
	  kube-system                 etcd-pause-026475                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         68s
	  kube-system                 kube-apiserver-pause-026475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-pause-026475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-lkxtd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-026475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node pause-026475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node pause-026475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node pause-026475 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    69s                kubelet          Node pause-026475 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  69s                kubelet          Node pause-026475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     69s                kubelet          Node pause-026475 status is now: NodeHasSufficientPID
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeReady                67s                kubelet          Node pause-026475 status is now: NodeReady
	  Normal  RegisteredNode           56s                node-controller  Node pause-026475 event: Registered Node pause-026475 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-026475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-026475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-026475 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node pause-026475 event: Registered Node pause-026475 in Controller
	
	
	==> dmesg <==
	[  +0.063539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084382] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.219467] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.140086] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.342298] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.832295] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.068316] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.875418] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.496847] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.586335] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.081072] kauditd_printk_skb: 41 callbacks suppressed
	[Aug 4 00:46] systemd-fstab-generator[1480]: Ignoring "noauto" option for root device
	[  +0.085938] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.087998] kauditd_printk_skb: 88 callbacks suppressed
	[ +13.963556] systemd-fstab-generator[2524]: Ignoring "noauto" option for root device
	[  +0.213733] systemd-fstab-generator[2592]: Ignoring "noauto" option for root device
	[  +0.455077] systemd-fstab-generator[2742]: Ignoring "noauto" option for root device
	[  +0.274425] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +0.528885] systemd-fstab-generator[2891]: Ignoring "noauto" option for root device
	[  +1.900161] systemd-fstab-generator[3487]: Ignoring "noauto" option for root device
	[  +2.690283] systemd-fstab-generator[3622]: Ignoring "noauto" option for root device
	[  +0.081740] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.544075] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.748271] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.099752] systemd-fstab-generator[4068]: Ignoring "noauto" option for root device
	
	
	==> etcd [6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5] <==
	{"level":"info","ts":"2024-08-04T00:46:41.831947Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","added-peer-id":"3cb84593c3b1392d","added-peer-peer-urls":["https://192.168.61.154:2380"]}
	{"level":"info","ts":"2024-08-04T00:46:41.832025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:41.832068Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:41.828534Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:41.845546Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:41.8456Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:41.847311Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:46:41.850777Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cb84593c3b1392d","initial-advertise-peer-urls":["https://192.168.61.154:2380"],"listen-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:46:41.850852Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:46:41.8486Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:41.850919Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:43.662558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:43.662623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:43.662675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d received MsgPreVoteResp from 3cb84593c3b1392d at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:43.66269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.662696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d received MsgVoteResp from 3cb84593c3b1392d at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.662703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.662711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3cb84593c3b1392d elected leader 3cb84593c3b1392d at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.665498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:46:43.665708Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:46:43.666119Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:46:43.66617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:46:43.665496Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3cb84593c3b1392d","local-member-attributes":"{Name:pause-026475 ClientURLs:[https://192.168.61.154:2379]}","request-path":"/0/members/3cb84593c3b1392d/attributes","cluster-id":"240748f85504e22c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:46:43.667864Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.154:2379"}
	{"level":"info","ts":"2024-08-04T00:46:43.667916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90] <==
	{"level":"info","ts":"2024-08-04T00:46:35.645659Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"102.409032ms"}
	{"level":"info","ts":"2024-08-04T00:46:35.696986Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-04T00:46:35.728591Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","commit-index":442}
	{"level":"info","ts":"2024-08-04T00:46:35.73004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-04T00:46:35.730143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became follower at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:35.730184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3cb84593c3b1392d [peers: [], term: 2, commit: 442, applied: 0, lastindex: 442, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-04T00:46:35.966704Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-04T00:46:36.03286Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":427}
	{"level":"info","ts":"2024-08-04T00:46:36.053855Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-04T00:46:36.074997Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3cb84593c3b1392d","timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:46:36.075362Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3cb84593c3b1392d"}
	{"level":"info","ts":"2024-08-04T00:46:36.075414Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"3cb84593c3b1392d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-04T00:46:36.081822Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-04T00:46:36.084771Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:36.084982Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:36.085015Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:36.09834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d switched to configuration voters=(4375323538936117549)"}
	{"level":"info","ts":"2024-08-04T00:46:36.11012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","added-peer-id":"3cb84593c3b1392d","added-peer-peer-urls":["https://192.168.61.154:2380"]}
	{"level":"info","ts":"2024-08-04T00:46:36.138874Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:36.138972Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:36.163777Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:46:36.165032Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:36.165057Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:36.165438Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cb84593c3b1392d","initial-advertise-peer-urls":["https://192.168.61.154:2380"],"listen-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:46:36.165547Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 00:47:03 up 1 min,  0 users,  load average: 1.31, 0.43, 0.15
	Linux pause-026475 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c] <==
	I0804 00:46:45.137647       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:46:45.138945       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:46:45.145723       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:46:45.146312       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:46:45.146348       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:46:45.146438       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:46:45.146788       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0804 00:46:45.154153       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 00:46:45.175945       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:46:45.176126       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:46:45.176231       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:46:45.176256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:46:45.176344       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:46:45.198084       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:46:45.203879       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:46:45.203983       1 policy_source.go:224] refreshing policies
	I0804 00:46:45.211116       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:46:46.048534       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 00:46:46.916308       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:46:46.945804       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:46:47.010335       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:46:47.049301       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:46:47.064215       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 00:46:58.084967       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 00:46:58.128748       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd] <==
	I0804 00:46:35.749825       1 options.go:221] external host was not specified, using 192.168.61.154
	I0804 00:46:35.751036       1 server.go:148] Version: v1.30.3
	I0804 00:46:35.751935       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779] <==
	I0804 00:46:58.063856       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:46:58.068530       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0804 00:46:58.072869       1 shared_informer.go:320] Caches are synced for PV protection
	I0804 00:46:58.077544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0804 00:46:58.077758       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0804 00:46:58.077821       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0804 00:46:58.077859       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0804 00:46:58.079591       1 shared_informer.go:320] Caches are synced for TTL
	I0804 00:46:58.079682       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0804 00:46:58.079766       1 shared_informer.go:320] Caches are synced for cronjob
	I0804 00:46:58.085122       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 00:46:58.090111       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 00:46:58.093546       1 shared_informer.go:320] Caches are synced for endpoint
	I0804 00:46:58.111136       1 shared_informer.go:320] Caches are synced for GC
	I0804 00:46:58.152738       1 shared_informer.go:320] Caches are synced for disruption
	I0804 00:46:58.154177       1 shared_informer.go:320] Caches are synced for deployment
	I0804 00:46:58.155382       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 00:46:58.192665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.08161ms"
	I0804 00:46:58.194571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="215.286µs"
	I0804 00:46:58.247788       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:46:58.263628       1 shared_informer.go:320] Caches are synced for HPA
	I0804 00:46:58.272346       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:46:58.679170       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:46:58.679272       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:46:58.695868       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9] <==
	
	
	==> kube-proxy [29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89] <==
	I0804 00:46:46.563022       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:46:46.579223       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.154"]
	I0804 00:46:46.654619       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:46:46.654705       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:46:46.654774       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:46:46.669551       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:46:46.671698       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:46:46.671737       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:46:46.673807       1 config.go:192] "Starting service config controller"
	I0804 00:46:46.673838       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:46:46.673859       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:46:46.673863       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:46:46.674838       1 config.go:319] "Starting node config controller"
	I0804 00:46:46.674864       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:46:46.774566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:46:46.774864       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:46:46.774926       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671] <==
	
	
	==> kube-scheduler [0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3] <==
	I0804 00:46:42.363224       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:46:45.091728       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:46:45.091771       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:46:45.091835       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:46:45.091842       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:46:45.120111       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:46:45.122665       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:46:45.126319       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:46:45.126398       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:46:45.127200       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:46:45.127324       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:46:45.227363       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725] <==
	
	
	==> kubelet <==
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.178863    3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b33707aea7190551e20501409933eb2-kubeconfig\") pod \"kube-controller-manager-pause-026475\" (UID: \"9b33707aea7190551e20501409933eb2\") " pod="kube-system/kube-controller-manager-pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.272023    3629 kubelet_node_status.go:73] "Attempting to register node" node="pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: E0804 00:46:41.272768    3629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.154:8443: connect: connection refused" node="pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.406410    3629 scope.go:117] "RemoveContainer" containerID="9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.407642    3629 scope.go:117] "RemoveContainer" containerID="8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.437596    3629 scope.go:117] "RemoveContainer" containerID="c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.441220    3629 scope.go:117] "RemoveContainer" containerID="38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: E0804 00:46:41.574997    3629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-026475?timeout=10s\": dial tcp 192.168.61.154:8443: connect: connection refused" interval="800ms"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.673964    3629 kubelet_node_status.go:73] "Attempting to register node" node="pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: E0804 00:46:41.674996    3629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.154:8443: connect: connection refused" node="pause-026475"
	Aug 04 00:46:42 pause-026475 kubelet[3629]: I0804 00:46:42.476933    3629 kubelet_node_status.go:73] "Attempting to register node" node="pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.295147    3629 kubelet_node_status.go:112] "Node was previously registered" node="pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.295252    3629 kubelet_node_status.go:76] "Successfully registered node" node="pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.296693    3629 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.298224    3629 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: E0804 00:46:45.301852    3629 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-026475\" already exists" pod="kube-system/kube-apiserver-pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.948064    3629 apiserver.go:52] "Watching apiserver"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.951123    3629 topology_manager.go:215] "Topology Admit Handler" podUID="20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7" podNamespace="kube-system" podName="kube-proxy-lkxtd"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.951275    3629 topology_manager.go:215] "Topology Admit Handler" podUID="ef6bbea6-d3d9-4488-b8b7-c25d32490f03" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sfzdw"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.970215    3629 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.005497    3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7-xtables-lock\") pod \"kube-proxy-lkxtd\" (UID: \"20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7\") " pod="kube-system/kube-proxy-lkxtd"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.005553    3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7-lib-modules\") pod \"kube-proxy-lkxtd\" (UID: \"20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7\") " pod="kube-system/kube-proxy-lkxtd"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.252004    3629 scope.go:117] "RemoveContainer" containerID="efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.254967    3629 scope.go:117] "RemoveContainer" containerID="3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671"
	Aug 04 00:46:54 pause-026475 kubelet[3629]: I0804 00:46:54.987810    3629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-026475 -n pause-026475
helpers_test.go:261: (dbg) Run:  kubectl --context pause-026475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-026475 -n pause-026475
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-026475 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-026475 logs -n 25: (1.42065651s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-055939       | kubernetes-upgrade-055939 | jenkins | v1.33.1 | 04 Aug 24 00:41 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-439963        | force-systemd-env-439963  | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| start   | -p stopped-upgrade-742754          | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:44 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:43 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-404249             | offline-crio-404249       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:43 UTC |
	| start   | -p running-upgrade-380850          | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:44 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:43 UTC |
	| start   | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:43 UTC | 04 Aug 24 00:44 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-742754 stop        | minikube                  | jenkins | v1.26.0 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:44 UTC |
	| start   | -p stopped-upgrade-742754          | stopped-upgrade-742754    | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:45 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-419151 sudo        | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| start   | -p running-upgrade-380850          | running-upgrade-380850    | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:45 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:44 UTC |
	| start   | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:44 UTC | 04 Aug 24 00:45 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-419151 sudo        | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-419151             | NoKubernetes-419151       | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:45 UTC |
	| start   | -p pause-026475 --memory=2048      | pause-026475              | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:46 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-742754          | stopped-upgrade-742754    | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:45 UTC |
	| start   | -p cert-expiration-443385          | cert-expiration-443385    | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:46 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-380850          | running-upgrade-380850    | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:45 UTC |
	| start   | -p force-systemd-flag-040288       | force-systemd-flag-040288 | jenkins | v1.33.1 | 04 Aug 24 00:45 UTC | 04 Aug 24 00:46 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-026475                    | pause-026475              | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC | 04 Aug 24 00:47 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-040288 ssh cat  | force-systemd-flag-040288 | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC | 04 Aug 24 00:46 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-040288       | force-systemd-flag-040288 | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC | 04 Aug 24 00:46 UTC |
	| start   | -p cert-options-941979             | cert-options-941979       | jenkins | v1.33.1 | 04 Aug 24 00:46 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
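
For readability, the multi-line cert-options-941979 entry at the bottom of the table corresponds to a single invocation along the following lines; the binary path (MINIKUBE_BIN from this run) and every flag value are taken verbatim from the rows above.

    out/minikube-linux-amd64 start -p cert-options-941979 \
      --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 \
      --driver=kvm2 --container-runtime=crio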
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:46:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:46:50.730276  377783 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:46:50.730505  377783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:46:50.730508  377783 out.go:304] Setting ErrFile to fd 2...
	I0804 00:46:50.730512  377783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:46:50.730724  377783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:46:50.731344  377783 out.go:298] Setting JSON to false
	I0804 00:46:50.732286  377783 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":34159,"bootTime":1722698252,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:46:50.732346  377783 start.go:139] virtualization: kvm guest
	I0804 00:46:50.734456  377783 out.go:177] * [cert-options-941979] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:46:50.735778  377783 out.go:177]   - MINIKUBE_LOCATION=19370
	I0804 00:46:50.735816  377783 notify.go:220] Checking for updates...
	I0804 00:46:50.738436  377783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:46:50.739855  377783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:46:50.741174  377783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:46:50.742591  377783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:46:50.743932  377783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:46:50.745601  377783 config.go:182] Loaded profile config "cert-expiration-443385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:50.745708  377783 config.go:182] Loaded profile config "kubernetes-upgrade-055939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 00:46:50.745813  377783 config.go:182] Loaded profile config "pause-026475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:50.745904  377783 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:46:50.785017  377783 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:46:50.786423  377783 start.go:297] selected driver: kvm2
	I0804 00:46:50.786432  377783 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:46:50.786442  377783 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:46:50.787194  377783 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:46:50.787275  377783 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:46:50.803588  377783 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:46:50.803634  377783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:46:50.803962  377783 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:46:50.803986  377783 cni.go:84] Creating CNI manager for ""
	I0804 00:46:50.803993  377783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:46:50.804001  377783 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:46:50.804075  377783 start.go:340] cluster config:
	{Name:cert-options-941979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-941979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.
1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0804 00:46:50.804173  377783 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:46:50.807090  377783 out.go:177] * Starting "cert-options-941979" primary control-plane node in "cert-options-941979" cluster
	I0804 00:46:50.808586  377783 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:46:50.808623  377783 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:46:50.808631  377783 cache.go:56] Caching tarball of preloaded images
	I0804 00:46:50.808716  377783 preload.go:172] Found /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:46:50.808723  377783 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:46:50.808824  377783 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/cert-options-941979/config.json ...
	I0804 00:46:50.808837  377783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/cert-options-941979/config.json: {Name:mk8fa0a29f7dfb8a38d63405815015fc69a976db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
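
The profile config saved above is plain JSON. Assuming its layout mirrors the ClusterConfig fields printed just above in the "cluster config:" dump (an assumption, not verified here) and that jq is available on the host, the API-server overrides for this profile could be inspected with something like:

    jq '.KubernetesConfig | {APIServerNames, APIServerIPs}' \
      /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/cert-options-941979/config.json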
	I0804 00:46:50.808965  377783 start.go:360] acquireMachinesLock for cert-options-941979: {Name:mk5ba202e04f74356b5f17ea45c4c5cbbb1b10e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:46:50.808998  377783 start.go:364] duration metric: took 15.511µs to acquireMachinesLock for "cert-options-941979"
	I0804 00:46:50.809013  377783 start.go:93] Provisioning new machine with config: &{Name:cert-options-941979 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.30.3 ClusterName:cert-options-941979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:46:50.809084  377783 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:46:49.111016  377334 pod_ready.go:102] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"False"
	I0804 00:46:51.112577  377334 pod_ready.go:102] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"False"
	I0804 00:46:50.810762  377783 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0804 00:46:50.810941  377783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:46:50.810973  377783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:46:50.826271  377783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0804 00:46:50.826806  377783 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:46:50.827388  377783 main.go:141] libmachine: Using API Version  1
	I0804 00:46:50.827407  377783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:46:50.827878  377783 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:46:50.828080  377783 main.go:141] libmachine: (cert-options-941979) Calling .GetMachineName
	I0804 00:46:50.828243  377783 main.go:141] libmachine: (cert-options-941979) Calling .DriverName
	I0804 00:46:50.828384  377783 start.go:159] libmachine.API.Create for "cert-options-941979" (driver="kvm2")
	I0804 00:46:50.828410  377783 client.go:168] LocalClient.Create starting
	I0804 00:46:50.828445  377783 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/ca.pem
	I0804 00:46:50.828484  377783 main.go:141] libmachine: Decoding PEM data...
	I0804 00:46:50.828500  377783 main.go:141] libmachine: Parsing certificate...
	I0804 00:46:50.828562  377783 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19370-323890/.minikube/certs/cert.pem
	I0804 00:46:50.828578  377783 main.go:141] libmachine: Decoding PEM data...
	I0804 00:46:50.828594  377783 main.go:141] libmachine: Parsing certificate...
	I0804 00:46:50.828613  377783 main.go:141] libmachine: Running pre-create checks...
	I0804 00:46:50.828618  377783 main.go:141] libmachine: (cert-options-941979) Calling .PreCreateCheck
	I0804 00:46:50.828993  377783 main.go:141] libmachine: (cert-options-941979) Calling .GetConfigRaw
	I0804 00:46:50.829420  377783 main.go:141] libmachine: Creating machine...
	I0804 00:46:50.829427  377783 main.go:141] libmachine: (cert-options-941979) Calling .Create
	I0804 00:46:50.829570  377783 main.go:141] libmachine: (cert-options-941979) Creating KVM machine...
	I0804 00:46:50.830782  377783 main.go:141] libmachine: (cert-options-941979) DBG | found existing default KVM network
	I0804 00:46:50.833391  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.833205  377806 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0804 00:46:50.834425  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.834342  377806 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:36:97:76} reservation:<nil>}
	I0804 00:46:50.835290  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.835209  377806 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:9e:10} reservation:<nil>}
	I0804 00:46:50.836033  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.835955  377806 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:ce:2a:56} reservation:<nil>}
	I0804 00:46:50.837279  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.837193  377806 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003b0610}
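
The subnet scan above skips 192.168.39/50/61/72 because other live test profiles already hold them, and settles on the free 192.168.83.0/24. A hedged host-side way to see the same picture is to enumerate the libvirt networks directly (qemu:///system is the URI used throughout this run):

    virsh --connect qemu:///system net-list --all
    # dump each network definition to see which /24 it occupies
    for n in $(virsh --connect qemu:///system net-list --name); do
      virsh --connect qemu:///system net-dumpxml "$n" | grep "ip address"
    done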
	I0804 00:46:50.837326  377783 main.go:141] libmachine: (cert-options-941979) DBG | created network xml: 
	I0804 00:46:50.837337  377783 main.go:141] libmachine: (cert-options-941979) DBG | <network>
	I0804 00:46:50.837356  377783 main.go:141] libmachine: (cert-options-941979) DBG |   <name>mk-cert-options-941979</name>
	I0804 00:46:50.837360  377783 main.go:141] libmachine: (cert-options-941979) DBG |   <dns enable='no'/>
	I0804 00:46:50.837365  377783 main.go:141] libmachine: (cert-options-941979) DBG |   
	I0804 00:46:50.837371  377783 main.go:141] libmachine: (cert-options-941979) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0804 00:46:50.837375  377783 main.go:141] libmachine: (cert-options-941979) DBG |     <dhcp>
	I0804 00:46:50.837379  377783 main.go:141] libmachine: (cert-options-941979) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0804 00:46:50.837384  377783 main.go:141] libmachine: (cert-options-941979) DBG |     </dhcp>
	I0804 00:46:50.837388  377783 main.go:141] libmachine: (cert-options-941979) DBG |   </ip>
	I0804 00:46:50.837392  377783 main.go:141] libmachine: (cert-options-941979) DBG |   
	I0804 00:46:50.837395  377783 main.go:141] libmachine: (cert-options-941979) DBG | </network>
	I0804 00:46:50.837401  377783 main.go:141] libmachine: (cert-options-941979) DBG | 
	I0804 00:46:50.842992  377783 main.go:141] libmachine: (cert-options-941979) DBG | trying to create private KVM network mk-cert-options-941979 192.168.83.0/24...
	I0804 00:46:50.916161  377783 main.go:141] libmachine: (cert-options-941979) Setting up store path in /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979 ...
	I0804 00:46:50.916184  377783 main.go:141] libmachine: (cert-options-941979) DBG | private KVM network mk-cert-options-941979 192.168.83.0/24 created
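
The DBG block above is the libvirt network XML minikube generated for this profile (DHCP range 192.168.83.2-253, DNS disabled). Once the network has been created, it can be inspected from the host with, for example:

    virsh --connect qemu:///system net-info mk-cert-options-941979
    virsh --connect qemu:///system net-dhcp-leases mk-cert-options-941979   # shows the VM's lease once it boots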
	I0804 00:46:50.916196  377783 main.go:141] libmachine: (cert-options-941979) Building disk image from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:46:50.916215  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:50.916096  377806 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:46:50.916327  377783 main.go:141] libmachine: (cert-options-941979) Downloading /home/jenkins/minikube-integration/19370-323890/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:46:51.175432  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:51.175285  377806 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/id_rsa...
	I0804 00:46:51.318949  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:51.318786  377806 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/cert-options-941979.rawdisk...
	I0804 00:46:51.318975  377783 main.go:141] libmachine: (cert-options-941979) DBG | Writing magic tar header
	I0804 00:46:51.318993  377783 main.go:141] libmachine: (cert-options-941979) DBG | Writing SSH key tar header
	I0804 00:46:51.319003  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:51.318960  377806 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979 ...
	I0804 00:46:51.319142  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979
	I0804 00:46:51.319156  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube/machines
	I0804 00:46:51.319164  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979 (perms=drwx------)
	I0804 00:46:51.319174  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:46:51.319179  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890/.minikube (perms=drwxr-xr-x)
	I0804 00:46:51.319192  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration/19370-323890 (perms=drwxrwxr-x)
	I0804 00:46:51.319202  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:46:51.319211  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890/.minikube
	I0804 00:46:51.319224  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19370-323890
	I0804 00:46:51.319230  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:46:51.319235  377783 main.go:141] libmachine: (cert-options-941979) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:46:51.319253  377783 main.go:141] libmachine: (cert-options-941979) Creating domain...
	I0804 00:46:51.319259  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:46:51.319263  377783 main.go:141] libmachine: (cert-options-941979) DBG | Checking permissions on dir: /home
	I0804 00:46:51.319269  377783 main.go:141] libmachine: (cert-options-941979) DBG | Skipping /home - not owner
	I0804 00:46:51.320880  377783 main.go:141] libmachine: (cert-options-941979) define libvirt domain using xml: 
	I0804 00:46:51.320899  377783 main.go:141] libmachine: (cert-options-941979) <domain type='kvm'>
	I0804 00:46:51.320908  377783 main.go:141] libmachine: (cert-options-941979)   <name>cert-options-941979</name>
	I0804 00:46:51.320913  377783 main.go:141] libmachine: (cert-options-941979)   <memory unit='MiB'>2048</memory>
	I0804 00:46:51.320929  377783 main.go:141] libmachine: (cert-options-941979)   <vcpu>2</vcpu>
	I0804 00:46:51.320938  377783 main.go:141] libmachine: (cert-options-941979)   <features>
	I0804 00:46:51.320942  377783 main.go:141] libmachine: (cert-options-941979)     <acpi/>
	I0804 00:46:51.320946  377783 main.go:141] libmachine: (cert-options-941979)     <apic/>
	I0804 00:46:51.320950  377783 main.go:141] libmachine: (cert-options-941979)     <pae/>
	I0804 00:46:51.320956  377783 main.go:141] libmachine: (cert-options-941979)     
	I0804 00:46:51.320960  377783 main.go:141] libmachine: (cert-options-941979)   </features>
	I0804 00:46:51.320964  377783 main.go:141] libmachine: (cert-options-941979)   <cpu mode='host-passthrough'>
	I0804 00:46:51.320968  377783 main.go:141] libmachine: (cert-options-941979)   
	I0804 00:46:51.320971  377783 main.go:141] libmachine: (cert-options-941979)   </cpu>
	I0804 00:46:51.320975  377783 main.go:141] libmachine: (cert-options-941979)   <os>
	I0804 00:46:51.320979  377783 main.go:141] libmachine: (cert-options-941979)     <type>hvm</type>
	I0804 00:46:51.320983  377783 main.go:141] libmachine: (cert-options-941979)     <boot dev='cdrom'/>
	I0804 00:46:51.320986  377783 main.go:141] libmachine: (cert-options-941979)     <boot dev='hd'/>
	I0804 00:46:51.320991  377783 main.go:141] libmachine: (cert-options-941979)     <bootmenu enable='no'/>
	I0804 00:46:51.320994  377783 main.go:141] libmachine: (cert-options-941979)   </os>
	I0804 00:46:51.320998  377783 main.go:141] libmachine: (cert-options-941979)   <devices>
	I0804 00:46:51.321002  377783 main.go:141] libmachine: (cert-options-941979)     <disk type='file' device='cdrom'>
	I0804 00:46:51.321009  377783 main.go:141] libmachine: (cert-options-941979)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/boot2docker.iso'/>
	I0804 00:46:51.321013  377783 main.go:141] libmachine: (cert-options-941979)       <target dev='hdc' bus='scsi'/>
	I0804 00:46:51.321022  377783 main.go:141] libmachine: (cert-options-941979)       <readonly/>
	I0804 00:46:51.321024  377783 main.go:141] libmachine: (cert-options-941979)     </disk>
	I0804 00:46:51.321054  377783 main.go:141] libmachine: (cert-options-941979)     <disk type='file' device='disk'>
	I0804 00:46:51.321071  377783 main.go:141] libmachine: (cert-options-941979)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:46:51.321098  377783 main.go:141] libmachine: (cert-options-941979)       <source file='/home/jenkins/minikube-integration/19370-323890/.minikube/machines/cert-options-941979/cert-options-941979.rawdisk'/>
	I0804 00:46:51.321103  377783 main.go:141] libmachine: (cert-options-941979)       <target dev='hda' bus='virtio'/>
	I0804 00:46:51.321107  377783 main.go:141] libmachine: (cert-options-941979)     </disk>
	I0804 00:46:51.321111  377783 main.go:141] libmachine: (cert-options-941979)     <interface type='network'>
	I0804 00:46:51.321117  377783 main.go:141] libmachine: (cert-options-941979)       <source network='mk-cert-options-941979'/>
	I0804 00:46:51.321121  377783 main.go:141] libmachine: (cert-options-941979)       <model type='virtio'/>
	I0804 00:46:51.321127  377783 main.go:141] libmachine: (cert-options-941979)     </interface>
	I0804 00:46:51.321133  377783 main.go:141] libmachine: (cert-options-941979)     <interface type='network'>
	I0804 00:46:51.321140  377783 main.go:141] libmachine: (cert-options-941979)       <source network='default'/>
	I0804 00:46:51.321150  377783 main.go:141] libmachine: (cert-options-941979)       <model type='virtio'/>
	I0804 00:46:51.321158  377783 main.go:141] libmachine: (cert-options-941979)     </interface>
	I0804 00:46:51.321164  377783 main.go:141] libmachine: (cert-options-941979)     <serial type='pty'>
	I0804 00:46:51.321172  377783 main.go:141] libmachine: (cert-options-941979)       <target port='0'/>
	I0804 00:46:51.321177  377783 main.go:141] libmachine: (cert-options-941979)     </serial>
	I0804 00:46:51.321183  377783 main.go:141] libmachine: (cert-options-941979)     <console type='pty'>
	I0804 00:46:51.321187  377783 main.go:141] libmachine: (cert-options-941979)       <target type='serial' port='0'/>
	I0804 00:46:51.321191  377783 main.go:141] libmachine: (cert-options-941979)     </console>
	I0804 00:46:51.321197  377783 main.go:141] libmachine: (cert-options-941979)     <rng model='virtio'>
	I0804 00:46:51.321202  377783 main.go:141] libmachine: (cert-options-941979)       <backend model='random'>/dev/random</backend>
	I0804 00:46:51.321206  377783 main.go:141] libmachine: (cert-options-941979)     </rng>
	I0804 00:46:51.321212  377783 main.go:141] libmachine: (cert-options-941979)     
	I0804 00:46:51.321216  377783 main.go:141] libmachine: (cert-options-941979)     
	I0804 00:46:51.321223  377783 main.go:141] libmachine: (cert-options-941979)   </devices>
	I0804 00:46:51.321233  377783 main.go:141] libmachine: (cert-options-941979) </domain>
	I0804 00:46:51.321244  377783 main.go:141] libmachine: (cert-options-941979) 
	I0804 00:46:51.325639  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:cd:cc:89 in network default
	I0804 00:46:51.326223  377783 main.go:141] libmachine: (cert-options-941979) Ensuring networks are active...
	I0804 00:46:51.326235  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:51.326873  377783 main.go:141] libmachine: (cert-options-941979) Ensuring network default is active
	I0804 00:46:51.327237  377783 main.go:141] libmachine: (cert-options-941979) Ensuring network mk-cert-options-941979 is active
	I0804 00:46:51.327712  377783 main.go:141] libmachine: (cert-options-941979) Getting domain xml...
	I0804 00:46:51.328421  377783 main.go:141] libmachine: (cert-options-941979) Creating domain...
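
The domain XML above boots the profile's boot2docker.iso, attaches the raw disk image, and defines two virtio NICs, one on mk-cert-options-941979 and one on the default network, matching the two MAC addresses logged just above. A sketch of how this could be confirmed from the host:

    virsh --connect qemu:///system dumpxml cert-options-941979
    virsh --connect qemu:///system domiflist cert-options-941979   # one row per NIC: interface, type, source network, model, MAC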
	I0804 00:46:52.590148  377783 main.go:141] libmachine: (cert-options-941979) Waiting to get IP...
	I0804 00:46:52.590934  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:52.591378  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:52.591425  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:52.591376  377806 retry.go:31] will retry after 284.890797ms: waiting for machine to come up
	I0804 00:46:52.878098  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:52.878590  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:52.878619  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:52.878564  377806 retry.go:31] will retry after 268.373072ms: waiting for machine to come up
	I0804 00:46:53.149174  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:53.149631  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:53.149663  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:53.149612  377806 retry.go:31] will retry after 437.861466ms: waiting for machine to come up
	I0804 00:46:53.589293  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:53.589930  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:53.589953  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:53.589872  377806 retry.go:31] will retry after 458.061449ms: waiting for machine to come up
	I0804 00:46:54.049267  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:54.049877  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:54.049895  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:54.049812  377806 retry.go:31] will retry after 588.850048ms: waiting for machine to come up
	I0804 00:46:54.640690  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:54.641249  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:54.641267  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:54.641179  377806 retry.go:31] will retry after 828.879886ms: waiting for machine to come up
	I0804 00:46:55.471843  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:55.472499  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:55.472523  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:55.472419  377806 retry.go:31] will retry after 924.844441ms: waiting for machine to come up
	I0804 00:46:53.610659  377334 pod_ready.go:102] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"False"
	I0804 00:46:55.613647  377334 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:55.613678  377334 pod_ready.go:81] duration metric: took 8.510089589s for pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:55.613693  377334 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.121529  377334 pod_ready.go:92] pod "etcd-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:57.121559  377334 pod_ready.go:81] duration metric: took 1.507857236s for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.121571  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.126598  377334 pod_ready.go:92] pod "kube-apiserver-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:57.126623  377334 pod_ready.go:81] duration metric: took 5.044659ms for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:57.126632  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.133302  377334 pod_ready.go:92] pod "kube-controller-manager-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.133327  377334 pod_ready.go:81] duration metric: took 2.006688251s for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.133336  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.138982  377334 pod_ready.go:92] pod "kube-proxy-lkxtd" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.139014  377334 pod_ready.go:81] duration metric: took 5.670224ms for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.139028  377334 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.144509  377334 pod_ready.go:92] pod "kube-scheduler-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.144536  377334 pod_ready.go:81] duration metric: took 5.499312ms for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.144546  377334 pod_ready.go:38] duration metric: took 12.048065979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:46:59.144604  377334 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:46:59.158318  377334 ops.go:34] apiserver oom_adj: -16
	I0804 00:46:59.158349  377334 kubeadm.go:597] duration metric: took 20.37116598s to restartPrimaryControlPlane
	I0804 00:46:59.158362  377334 kubeadm.go:394] duration metric: took 20.462947366s to StartCluster
	I0804 00:46:59.158389  377334 settings.go:142] acquiring lock: {Name:mk918fd72253bf33e8bae308fd36ed8b1c353763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:46:59.158477  377334 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0804 00:46:59.159449  377334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/kubeconfig: {Name:mkd789cdd11c6330d283dbc76129ed198eb15398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:46:59.159731  377334 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:46:59.159861  377334 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 00:46:59.159986  377334 config.go:182] Loaded profile config "pause-026475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:46:59.161287  377334 out.go:177] * Enabled addons: 
	I0804 00:46:59.161293  377334 out.go:177] * Verifying Kubernetes components...
	I0804 00:46:56.398791  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:56.399355  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:56.399375  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:56.399288  377806 retry.go:31] will retry after 1.389067548s: waiting for machine to come up
	I0804 00:46:57.790558  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:57.791043  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:57.791073  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:57.790993  377806 retry.go:31] will retry after 1.72029868s: waiting for machine to come up
	I0804 00:46:59.513078  377783 main.go:141] libmachine: (cert-options-941979) DBG | domain cert-options-941979 has defined MAC address 52:54:00:e2:d8:9c in network mk-cert-options-941979
	I0804 00:46:59.513526  377783 main.go:141] libmachine: (cert-options-941979) DBG | unable to find current IP address of domain cert-options-941979 in network mk-cert-options-941979
	I0804 00:46:59.513542  377783 main.go:141] libmachine: (cert-options-941979) DBG | I0804 00:46:59.513478  377806 retry.go:31] will retry after 1.791687997s: waiting for machine to come up
	I0804 00:46:59.162492  377334 addons.go:510] duration metric: took 2.640686ms for enable addons: enabled=[]
	I0804 00:46:59.162530  377334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:46:59.320398  377334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:46:59.337539  377334 node_ready.go:35] waiting up to 6m0s for node "pause-026475" to be "Ready" ...
	I0804 00:46:59.340742  377334 node_ready.go:49] node "pause-026475" has status "Ready":"True"
	I0804 00:46:59.340774  377334 node_ready.go:38] duration metric: took 3.184416ms for node "pause-026475" to be "Ready" ...
	I0804 00:46:59.340786  377334 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:46:59.345779  377334 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.607940  377334 pod_ready.go:92] pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:59.607967  377334 pod_ready.go:81] duration metric: took 262.164826ms for pod "coredns-7db6d8ff4d-sfzdw" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:59.607981  377334 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.008156  377334 pod_ready.go:92] pod "etcd-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:00.008196  377334 pod_ready.go:81] duration metric: took 400.199214ms for pod "etcd-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.008211  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.408120  377334 pod_ready.go:92] pod "kube-apiserver-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:00.408168  377334 pod_ready.go:81] duration metric: took 399.934775ms for pod "kube-apiserver-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.408183  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.808321  377334 pod_ready.go:92] pod "kube-controller-manager-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:00.808346  377334 pod_ready.go:81] duration metric: took 400.155433ms for pod "kube-controller-manager-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:00.808363  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.207887  377334 pod_ready.go:92] pod "kube-proxy-lkxtd" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:01.207922  377334 pod_ready.go:81] duration metric: took 399.551169ms for pod "kube-proxy-lkxtd" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.207934  377334 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.608167  377334 pod_ready.go:92] pod "kube-scheduler-pause-026475" in "kube-system" namespace has status "Ready":"True"
	I0804 00:47:01.608192  377334 pod_ready.go:81] duration metric: took 400.250966ms for pod "kube-scheduler-pause-026475" in "kube-system" namespace to be "Ready" ...
	I0804 00:47:01.608200  377334 pod_ready.go:38] duration metric: took 2.26740327s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
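
The readiness polling above waits on exactly the labels listed in the log (k8s-app=kube-dns, component=etcd/kube-apiserver/kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). The same checks can be reproduced by hand with kubectl, assuming the kubeconfig context created by this run is named after the profile:

    kubectl --context pause-026475 -n kube-system get pods
    kubectl --context pause-026475 -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=6m
    kubectl --context pause-026475 -n kube-system wait pod -l component=kube-apiserver \
      --for=condition=Ready --timeout=6m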
	I0804 00:47:01.608233  377334 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:47:01.608314  377334 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:47:01.626182  377334 api_server.go:72] duration metric: took 2.466404272s to wait for apiserver process to appear ...
	I0804 00:47:01.626215  377334 api_server.go:88] waiting for apiserver healthz status ...
	I0804 00:47:01.626243  377334 api_server.go:253] Checking apiserver healthz at https://192.168.61.154:8443/healthz ...
	I0804 00:47:01.630635  377334 api_server.go:279] https://192.168.61.154:8443/healthz returned 200:
	ok
	I0804 00:47:01.631727  377334 api_server.go:141] control plane version: v1.30.3
	I0804 00:47:01.631757  377334 api_server.go:131] duration metric: took 5.532976ms to wait for apiserver health ...
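
The healthz probe above returned HTTP 200 with body "ok". A quick manual re-check of the same endpoint (skipping TLS verification; address and port taken from this run) would be:

    curl -k https://192.168.61.154:8443/healthz
    # expected response body: ok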
	I0804 00:47:01.631767  377334 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 00:47:01.810288  377334 system_pods.go:59] 6 kube-system pods found
	I0804 00:47:01.810320  377334 system_pods.go:61] "coredns-7db6d8ff4d-sfzdw" [ef6bbea6-d3d9-4488-b8b7-c25d32490f03] Running
	I0804 00:47:01.810326  377334 system_pods.go:61] "etcd-pause-026475" [28537153-9571-4b3f-8adf-e77398e99ed4] Running
	I0804 00:47:01.810329  377334 system_pods.go:61] "kube-apiserver-pause-026475" [3c0c14e9-f3c1-4422-a61b-7d1680d92c55] Running
	I0804 00:47:01.810333  377334 system_pods.go:61] "kube-controller-manager-pause-026475" [5ea96e8f-e05e-42fd-9dd0-190ea692fd4c] Running
	I0804 00:47:01.810336  377334 system_pods.go:61] "kube-proxy-lkxtd" [20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7] Running
	I0804 00:47:01.810339  377334 system_pods.go:61] "kube-scheduler-pause-026475" [0ae126cb-34d9-4838-946a-41f6f6f33dba] Running
	I0804 00:47:01.810345  377334 system_pods.go:74] duration metric: took 178.571599ms to wait for pod list to return data ...
	I0804 00:47:01.810353  377334 default_sa.go:34] waiting for default service account to be created ...
	I0804 00:47:02.007698  377334 default_sa.go:45] found service account: "default"
	I0804 00:47:02.007731  377334 default_sa.go:55] duration metric: took 197.371662ms for default service account to be created ...
	I0804 00:47:02.007741  377334 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 00:47:02.210544  377334 system_pods.go:86] 6 kube-system pods found
	I0804 00:47:02.210574  377334 system_pods.go:89] "coredns-7db6d8ff4d-sfzdw" [ef6bbea6-d3d9-4488-b8b7-c25d32490f03] Running
	I0804 00:47:02.210580  377334 system_pods.go:89] "etcd-pause-026475" [28537153-9571-4b3f-8adf-e77398e99ed4] Running
	I0804 00:47:02.210585  377334 system_pods.go:89] "kube-apiserver-pause-026475" [3c0c14e9-f3c1-4422-a61b-7d1680d92c55] Running
	I0804 00:47:02.210588  377334 system_pods.go:89] "kube-controller-manager-pause-026475" [5ea96e8f-e05e-42fd-9dd0-190ea692fd4c] Running
	I0804 00:47:02.210592  377334 system_pods.go:89] "kube-proxy-lkxtd" [20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7] Running
	I0804 00:47:02.210596  377334 system_pods.go:89] "kube-scheduler-pause-026475" [0ae126cb-34d9-4838-946a-41f6f6f33dba] Running
	I0804 00:47:02.210601  377334 system_pods.go:126] duration metric: took 202.856251ms to wait for k8s-apps to be running ...
	I0804 00:47:02.210608  377334 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 00:47:02.210671  377334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:47:02.225618  377334 system_svc.go:56] duration metric: took 14.996816ms WaitForService to wait for kubelet
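
The kubelet service probe above runs inside the VM over SSH. A roughly equivalent manual check, using the profile name and binary path from this run, might look like:

    out/minikube-linux-amd64 -p pause-026475 ssh -- sudo systemctl is-active kubelet
    # prints "active" and exits 0 while the kubelet unit is running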
	I0804 00:47:02.225652  377334 kubeadm.go:582] duration metric: took 3.065882932s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:47:02.225673  377334 node_conditions.go:102] verifying NodePressure condition ...
	I0804 00:47:02.408446  377334 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 00:47:02.408486  377334 node_conditions.go:123] node cpu capacity is 2
	I0804 00:47:02.408499  377334 node_conditions.go:105] duration metric: took 182.820714ms to run NodePressure ...
	I0804 00:47:02.408515  377334 start.go:241] waiting for startup goroutines ...
	I0804 00:47:02.408524  377334 start.go:246] waiting for cluster config update ...
	I0804 00:47:02.408535  377334 start.go:255] writing updated cluster config ...
	I0804 00:47:02.409374  377334 ssh_runner.go:195] Run: rm -f paused
	I0804 00:47:02.470162  377334 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 00:47:02.472103  377334 out.go:177] * Done! kubectl is now configured to use "pause-026475" cluster and "default" namespace by default
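
After the "Done!" message the test's kubeconfig points at the pause-026475 cluster; with the KUBECONFIG path logged earlier in this run, that can be confirmed with:

    KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig \
      kubectl config current-context   # expected: pause-026475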
	
	
	==> CRI-O <==
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.228904469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732425228879473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddb6f969-63fe-45e4-93bb-11be5c90d4ff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.229375880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33cb8b76-d6ef-4754-af81-15f9c9fccc3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.229529990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33cb8b76-d6ef-4754-af81-15f9c9fccc3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.229808336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33cb8b76-d6ef-4754-af81-15f9c9fccc3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.279641578Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa526372-e5d9-4ede-a6db-8bfda69f8f45 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.279727691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa526372-e5d9-4ede-a6db-8bfda69f8f45 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.281323953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57ddbd83-0fcc-45d1-b99d-b8f63c08a8cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.281864638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732425281838909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57ddbd83-0fcc-45d1-b99d-b8f63c08a8cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.282761348Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2c2a3cdc-2e5a-4ccb-a7cc-2f7a9c1f5436 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.282899038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=979c1a5a-3ea1-4968-9cc7-29b98e6bb320 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.282958429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=979c1a5a-3ea1-4968-9cc7-29b98e6bb320 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.283073149Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sfzdw,Uid:ef6bbea6-d3d9-4488-b8b7-c25d32490f03,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397603134502,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:09.123163815Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-026475,Uid:9b33707aea7190551e20501409933eb2,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397599188380,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9b33707aea7190551e20501409933eb2,kubernetes.io/config.seen: 2024-08-04T00:45:54.847407917Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&PodSandboxMetadata{Name:etcd-pause-026475,Uid:c5c0a9d05da944e4b81fb42bb2ad6c23,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397598542525,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,tier: cont
rol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.154:2379,kubernetes.io/config.hash: c5c0a9d05da944e4b81fb42bb2ad6c23,kubernetes.io/config.seen: 2024-08-04T00:45:54.847413374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-026475,Uid:110694c6253f95e9447c9b29460bc946,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397593597032,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 110694c6253f95e9447c9b29460bc946,kubernetes.io/config.seen: 2024-08-04T00:45:54.847412339Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&PodSandboxMetadata{Name:kube-proxy-lkxtd,Uid:20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397508901296,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:07.854280020Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-026475,Uid:ce1303851bb2317bd9914030617c11e7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722732397481115693,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd9914030617c11e7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.154:8443,kubernetes.io/config.hash: ce1303851bb2317bd9914030617c11e7,kubernetes.io/config.seen: 2024-08-04T00:45:54.847414458Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-026475,Uid:9b33707aea7190551e20501409933eb2,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394390197330,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,tier: control-plane,},Annotations:map[stri
ng]string{kubernetes.io/config.hash: 9b33707aea7190551e20501409933eb2,kubernetes.io/config.seen: 2024-08-04T00:45:54.847407917Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sfzdw,Uid:ef6bbea6-d3d9-4488-b8b7-c25d32490f03,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394382737615,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:09.123163815Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-026475,Uid:110694c6253f95e9447c9b29460bc
946,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394370748246,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 110694c6253f95e9447c9b29460bc946,kubernetes.io/config.seen: 2024-08-04T00:45:54.847412339Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-026475,Uid:ce1303851bb2317bd9914030617c11e7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394345675946,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce130385
1bb2317bd9914030617c11e7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.154:8443,kubernetes.io/config.hash: ce1303851bb2317bd9914030617c11e7,kubernetes.io/config.seen: 2024-08-04T00:45:54.847414458Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&PodSandboxMetadata{Name:etcd-pause-026475,Uid:c5c0a9d05da944e4b81fb42bb2ad6c23,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394312838957,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.154:2379,kubernetes.io/config.hash: c5c0a9d05da944e4b81fb42bb2ad6c23,kubernetes.io/config.seen: 202
4-08-04T00:45:54.847413374Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&PodSandboxMetadata{Name:kube-proxy-lkxtd,Uid:20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722732394297822903,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:07.854280020Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b25fad84d1fe9a2dcab8177d324ad93f3634332c501963485432d369db890c9c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-tgqtq,Uid:3832d7fd-b315-462a-bfcf-dfd50ec24b62,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:17227323
69681587555,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-tgqtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3832d7fd-b315-462a-bfcf-dfd50ec24b62,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-04T00:46:09.057041154Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2c2a3cdc-2e5a-4ccb-a7cc-2f7a9c1f5436 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.283414299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=979c1a5a-3ea1-4968-9cc7-29b98e6bb320 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.286952326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ead71acd-fa46-4b41-a79a-28eac4e299a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.287138437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ead71acd-fa46-4b41-a79a-28eac4e299a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.288219182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ead71acd-fa46-4b41-a79a-28eac4e299a9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.345953965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd740a13-d182-4367-af54-d9f943a6c623 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.346047404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd740a13-d182-4367-af54-d9f943a6c623 name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.347635127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1df6866c-3724-451e-b642-f0af27ad2063 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.348096656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722732425348073000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1df6866c-3724-451e-b642-f0af27ad2063 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.348809379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f90fe57f-06e6-4c55-a1e4-fe6d336155e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.348865079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f90fe57f-06e6-4c55-a1e4-fe6d336155e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.349341037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89,PodSandboxId:aef16e8560fb0adb52aa9e09da112eb595cb9200ddf7e901a4662377cc1e1fc6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722732406303064901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f,PodSandboxId:5d9706be6c0b58b21f88eba58353c0635aa34321d5c8d60e7e77a200f8cbc1c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722732406279739643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff33f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3,PodSandboxId:94e30356c6e554305d2746aff96617528eb68e1586cf24dc0e8a211e5f6c7736,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722732401462124089,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 110694c625
3f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779,PodSandboxId:dc76ee338ef214fef02d39f76671c4515c6200f524125311182106850fb4c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722732401479521147,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c,PodSandboxId:f5dcfa6b22d6e5d231e5c379479f2f4d54d2924f4d5bcbdf625f755bce95a9eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722732401462446992,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce1303851bb2317bd99
14030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5,PodSandboxId:b23794878b9f94a18f1c23a8e946a88d8cf863675a3593f5c3e45f8ec6e0f9be,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722732401418112943,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.
kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9,PodSandboxId:93478432b62c95e163557fce0282485d1e42aa1c146975a364115f4379ef3183,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722732395981416782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sfzdw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef6bbea6-d3d9-4488-b8b7-c25d32490f03,},Annotations:map[string]string{io.kubernetes.container.hash: 41ff3
3f7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671,PodSandboxId:d075ad1d5af007bc39b7367d59b3a93fa87dabbcbb57e400bbf1a76d84a198c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722732395166987155,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-lkxtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c45caf5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9,PodSandboxId:e71208a6a980e9a756cdb9a26c013ac9ba6e9b761b7b33931296f6d503335a23,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722732395198377775,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: k
ube-controller-manager-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b33707aea7190551e20501409933eb2,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90,PodSandboxId:aaae48a8da338171b85ad9422318db07f01591be9283b28a9d4f937c63ee6329,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722732394766119627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-026475,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: c5c0a9d05da944e4b81fb42bb2ad6c23,},Annotations:map[string]string{io.kubernetes.container.hash: 9a6f7e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725,PodSandboxId:a9c5f34ecc694edbddce23d0ee8d709c93c110c2a0bfe8e545ece7a31ac825d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722732394920049574,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-026475,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 110694c6253f95e9447c9b29460bc946,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd,PodSandboxId:2f7cc28777323de4d38d4a5f62f5ad571179c6aa31c8688bc3cca770431a76f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722732394800062370,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-026475,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: ce1303851bb2317bd9914030617c11e7,},Annotations:map[string]string{io.kubernetes.container.hash: 579a403,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f90fe57f-06e6-4c55-a1e4-fe6d336155e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.349935384Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=b14e2e82-1575-4ffa-bd9a-10b5fbd3fcee name=/runtime.v1.RuntimeService/Version
	Aug 04 00:47:05 pause-026475 crio[2928]: time="2024-08-04 00:47:05.349995147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b14e2e82-1575-4ffa-bd9a-10b5fbd3fcee name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29cb4afd95a11       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago      Running             kube-proxy                2                   aef16e8560fb0       kube-proxy-lkxtd
	d396f27170fce       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   5d9706be6c0b5       coredns-7db6d8ff4d-sfzdw
	0ad1505c92ab9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago      Running             kube-controller-manager   2                   dc76ee338ef21       kube-controller-manager-pause-026475
	4d5fd019a91be       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago      Running             kube-apiserver            2                   f5dcfa6b22d6e       kube-apiserver-pause-026475
	0e02550fe523d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago      Running             kube-scheduler            2                   94e30356c6e55       kube-scheduler-pause-026475
	6690d21dcb307       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   b23794878b9f9       etcd-pause-026475
	efb6ce46e8895       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   1                   93478432b62c9       coredns-7db6d8ff4d-sfzdw
	38fe22695b3d7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   30 seconds ago      Exited              kube-controller-manager   1                   e71208a6a980e       kube-controller-manager-pause-026475
	3e31423cf2d48       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   30 seconds ago      Exited              kube-proxy                1                   d075ad1d5af00       kube-proxy-lkxtd
	c2d7a87a29a77       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   30 seconds ago      Exited              kube-scheduler            1                   a9c5f34ecc694       kube-scheduler-pause-026475
	9117817a698be       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   30 seconds ago      Exited              kube-apiserver            1                   2f7cc28777323       kube-apiserver-pause-026475
	8cb0cddfc6142       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Exited              etcd                      1                   aaae48a8da338       etcd-pause-026475
	
	
	==> coredns [d396f27170fce51c9081e95885332d72fc175741760f2441cca949b434bf4f1f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35816 - 60265 "HINFO IN 5140896754027385394.490744798263124189. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014539237s
	
	
	==> coredns [efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9] <==
	
	
	==> describe nodes <==
	Name:               pause-026475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-026475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a458de372b9644e908f257fb7a6e1c2cc14d7bf
	                    minikube.k8s.io/name=pause-026475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T00_45_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 00:45:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-026475
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 00:47:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 00:46:45 +0000   Sun, 04 Aug 2024 00:45:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.154
	  Hostname:    pause-026475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d6b46d6b45f4a26ab7191fca0d4edce
	  System UUID:                1d6b46d6-b45f-4a26-ab71-91fca0d4edce
	  Boot ID:                    c8ce224f-8c0d-42fb-988e-5ef66ef9e165
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-sfzdw                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     57s
	  kube-system                 etcd-pause-026475                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         70s
	  kube-system                 kube-apiserver-pause-026475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-026475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-lkxtd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-026475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node pause-026475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node pause-026475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node pause-026475 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    71s                kubelet          Node pause-026475 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  71s                kubelet          Node pause-026475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     71s                kubelet          Node pause-026475 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeReady                69s                kubelet          Node pause-026475 status is now: NodeReady
	  Normal  RegisteredNode           58s                node-controller  Node pause-026475 event: Registered Node pause-026475 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-026475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-026475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-026475 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-026475 event: Registered Node pause-026475 in Controller
	
	
	==> dmesg <==
	[  +0.063539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084382] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.219467] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.140086] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.342298] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.832295] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +0.068316] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.875418] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.496847] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.586335] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.081072] kauditd_printk_skb: 41 callbacks suppressed
	[Aug 4 00:46] systemd-fstab-generator[1480]: Ignoring "noauto" option for root device
	[  +0.085938] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.087998] kauditd_printk_skb: 88 callbacks suppressed
	[ +13.963556] systemd-fstab-generator[2524]: Ignoring "noauto" option for root device
	[  +0.213733] systemd-fstab-generator[2592]: Ignoring "noauto" option for root device
	[  +0.455077] systemd-fstab-generator[2742]: Ignoring "noauto" option for root device
	[  +0.274425] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +0.528885] systemd-fstab-generator[2891]: Ignoring "noauto" option for root device
	[  +1.900161] systemd-fstab-generator[3487]: Ignoring "noauto" option for root device
	[  +2.690283] systemd-fstab-generator[3622]: Ignoring "noauto" option for root device
	[  +0.081740] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.544075] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.748271] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.099752] systemd-fstab-generator[4068]: Ignoring "noauto" option for root device
	
	
	==> etcd [6690d21dcb307e178d32909905b309d032e33cfacad4296727fbaf047e9f27d5] <==
	{"level":"info","ts":"2024-08-04T00:46:41.831947Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","added-peer-id":"3cb84593c3b1392d","added-peer-peer-urls":["https://192.168.61.154:2380"]}
	{"level":"info","ts":"2024-08-04T00:46:41.832025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:41.832068Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:41.828534Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:41.845546Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:41.8456Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:41.847311Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:46:41.850777Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cb84593c3b1392d","initial-advertise-peer-urls":["https://192.168.61.154:2380"],"listen-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:46:41.850852Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T00:46:41.8486Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:41.850919Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:43.662558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:43.662623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:43.662675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d received MsgPreVoteResp from 3cb84593c3b1392d at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:43.66269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.662696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d received MsgVoteResp from 3cb84593c3b1392d at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.662703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became leader at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.662711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3cb84593c3b1392d elected leader 3cb84593c3b1392d at term 3"}
	{"level":"info","ts":"2024-08-04T00:46:43.665498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:46:43.665708Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T00:46:43.666119Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T00:46:43.66617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T00:46:43.665496Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3cb84593c3b1392d","local-member-attributes":"{Name:pause-026475 ClientURLs:[https://192.168.61.154:2379]}","request-path":"/0/members/3cb84593c3b1392d/attributes","cluster-id":"240748f85504e22c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:46:43.667864Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.154:2379"}
	{"level":"info","ts":"2024-08-04T00:46:43.667916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90] <==
	{"level":"info","ts":"2024-08-04T00:46:35.645659Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"102.409032ms"}
	{"level":"info","ts":"2024-08-04T00:46:35.696986Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-04T00:46:35.728591Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","commit-index":442}
	{"level":"info","ts":"2024-08-04T00:46:35.73004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-04T00:46:35.730143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d became follower at term 2"}
	{"level":"info","ts":"2024-08-04T00:46:35.730184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3cb84593c3b1392d [peers: [], term: 2, commit: 442, applied: 0, lastindex: 442, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-04T00:46:35.966704Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-04T00:46:36.03286Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":427}
	{"level":"info","ts":"2024-08-04T00:46:36.053855Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-04T00:46:36.074997Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3cb84593c3b1392d","timeout":"7s"}
	{"level":"info","ts":"2024-08-04T00:46:36.075362Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3cb84593c3b1392d"}
	{"level":"info","ts":"2024-08-04T00:46:36.075414Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"3cb84593c3b1392d","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-04T00:46:36.081822Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-04T00:46:36.084771Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:36.084982Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:36.085015Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T00:46:36.09834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3cb84593c3b1392d switched to configuration voters=(4375323538936117549)"}
	{"level":"info","ts":"2024-08-04T00:46:36.11012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","added-peer-id":"3cb84593c3b1392d","added-peer-peer-urls":["https://192.168.61.154:2380"]}
	{"level":"info","ts":"2024-08-04T00:46:36.138874Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"240748f85504e22c","local-member-id":"3cb84593c3b1392d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:36.138972Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T00:46:36.163777Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T00:46:36.165032Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:36.165057Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.154:2380"}
	{"level":"info","ts":"2024-08-04T00:46:36.165438Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3cb84593c3b1392d","initial-advertise-peer-urls":["https://192.168.61.154:2380"],"listen-peer-urls":["https://192.168.61.154:2380"],"advertise-client-urls":["https://192.168.61.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T00:46:36.165547Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 00:47:05 up 1 min,  0 users,  load average: 1.31, 0.43, 0.15
	Linux pause-026475 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4d5fd019a91bee61318e895993900799e9278eaa0d9651f0eeeb4e328fa1db8c] <==
	I0804 00:46:45.137647       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 00:46:45.138945       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 00:46:45.145723       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 00:46:45.146312       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 00:46:45.146348       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 00:46:45.146438       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 00:46:45.146788       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0804 00:46:45.154153       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 00:46:45.175945       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 00:46:45.176126       1 aggregator.go:165] initial CRD sync complete...
	I0804 00:46:45.176231       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 00:46:45.176256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 00:46:45.176344       1 cache.go:39] Caches are synced for autoregister controller
	I0804 00:46:45.198084       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 00:46:45.203879       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 00:46:45.203983       1 policy_source.go:224] refreshing policies
	I0804 00:46:45.211116       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 00:46:46.048534       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 00:46:46.916308       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 00:46:46.945804       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 00:46:47.010335       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 00:46:47.049301       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 00:46:47.064215       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 00:46:58.084967       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 00:46:58.128748       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd] <==
	I0804 00:46:35.749825       1 options.go:221] external host was not specified, using 192.168.61.154
	I0804 00:46:35.751036       1 server.go:148] Version: v1.30.3
	I0804 00:46:35.751935       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [0ad1505c92ab9837d06f09750bf777cb8229121150593e38610ac97cd508a779] <==
	I0804 00:46:58.063856       1 shared_informer.go:320] Caches are synced for persistent volume
	I0804 00:46:58.068530       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0804 00:46:58.072869       1 shared_informer.go:320] Caches are synced for PV protection
	I0804 00:46:58.077544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0804 00:46:58.077758       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0804 00:46:58.077821       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0804 00:46:58.077859       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0804 00:46:58.079591       1 shared_informer.go:320] Caches are synced for TTL
	I0804 00:46:58.079682       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0804 00:46:58.079766       1 shared_informer.go:320] Caches are synced for cronjob
	I0804 00:46:58.085122       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 00:46:58.090111       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 00:46:58.093546       1 shared_informer.go:320] Caches are synced for endpoint
	I0804 00:46:58.111136       1 shared_informer.go:320] Caches are synced for GC
	I0804 00:46:58.152738       1 shared_informer.go:320] Caches are synced for disruption
	I0804 00:46:58.154177       1 shared_informer.go:320] Caches are synced for deployment
	I0804 00:46:58.155382       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 00:46:58.192665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.08161ms"
	I0804 00:46:58.194571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="215.286µs"
	I0804 00:46:58.247788       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:46:58.263628       1 shared_informer.go:320] Caches are synced for HPA
	I0804 00:46:58.272346       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 00:46:58.679170       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 00:46:58.679272       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 00:46:58.695868       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9] <==
	
	
	==> kube-proxy [29cb4afd95a11c44e12a4a119287c687645296d0676e9c060a2798004eaf8e89] <==
	I0804 00:46:46.563022       1 server_linux.go:69] "Using iptables proxy"
	I0804 00:46:46.579223       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.154"]
	I0804 00:46:46.654619       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 00:46:46.654705       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 00:46:46.654774       1 server_linux.go:165] "Using iptables Proxier"
	I0804 00:46:46.669551       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 00:46:46.671698       1 server.go:872] "Version info" version="v1.30.3"
	I0804 00:46:46.671737       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:46:46.673807       1 config.go:192] "Starting service config controller"
	I0804 00:46:46.673838       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 00:46:46.673859       1 config.go:101] "Starting endpoint slice config controller"
	I0804 00:46:46.673863       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 00:46:46.674838       1 config.go:319] "Starting node config controller"
	I0804 00:46:46.674864       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 00:46:46.774566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 00:46:46.774864       1 shared_informer.go:320] Caches are synced for service config
	I0804 00:46:46.774926       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671] <==
	
	
	==> kube-scheduler [0e02550fe523d8a7081bfa2105a1ed184518cc2a5005e438c306cf6cbb8820d3] <==
	I0804 00:46:42.363224       1 serving.go:380] Generated self-signed cert in-memory
	W0804 00:46:45.091728       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 00:46:45.091771       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 00:46:45.091835       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 00:46:45.091842       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 00:46:45.120111       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 00:46:45.122665       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 00:46:45.126319       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 00:46:45.126398       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 00:46:45.127200       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 00:46:45.127324       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 00:46:45.227363       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725] <==
	
	
	==> kubelet <==
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.178863    3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b33707aea7190551e20501409933eb2-kubeconfig\") pod \"kube-controller-manager-pause-026475\" (UID: \"9b33707aea7190551e20501409933eb2\") " pod="kube-system/kube-controller-manager-pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.272023    3629 kubelet_node_status.go:73] "Attempting to register node" node="pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: E0804 00:46:41.272768    3629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.154:8443: connect: connection refused" node="pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.406410    3629 scope.go:117] "RemoveContainer" containerID="9117817a698be42c567cc1905d9fcba9f741a14ba7f647f3c0e11b5f0c9655dd"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.407642    3629 scope.go:117] "RemoveContainer" containerID="8cb0cddfc614289f334e1071a4c91b572b2881e7ba622859554ed56d9016ec90"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.437596    3629 scope.go:117] "RemoveContainer" containerID="c2d7a87a29a774a1a28b639938b2807ec94f345eb749deed3b7921825029a725"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.441220    3629 scope.go:117] "RemoveContainer" containerID="38fe22695b3d7f7b4aef09eb1f0088672dcd26f54155c4850acf10dd8f2f78e9"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: E0804 00:46:41.574997    3629 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-026475?timeout=10s\": dial tcp 192.168.61.154:8443: connect: connection refused" interval="800ms"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: I0804 00:46:41.673964    3629 kubelet_node_status.go:73] "Attempting to register node" node="pause-026475"
	Aug 04 00:46:41 pause-026475 kubelet[3629]: E0804 00:46:41.674996    3629 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.154:8443: connect: connection refused" node="pause-026475"
	Aug 04 00:46:42 pause-026475 kubelet[3629]: I0804 00:46:42.476933    3629 kubelet_node_status.go:73] "Attempting to register node" node="pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.295147    3629 kubelet_node_status.go:112] "Node was previously registered" node="pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.295252    3629 kubelet_node_status.go:76] "Successfully registered node" node="pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.296693    3629 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.298224    3629 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: E0804 00:46:45.301852    3629 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-026475\" already exists" pod="kube-system/kube-apiserver-pause-026475"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.948064    3629 apiserver.go:52] "Watching apiserver"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.951123    3629 topology_manager.go:215] "Topology Admit Handler" podUID="20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7" podNamespace="kube-system" podName="kube-proxy-lkxtd"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.951275    3629 topology_manager.go:215] "Topology Admit Handler" podUID="ef6bbea6-d3d9-4488-b8b7-c25d32490f03" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sfzdw"
	Aug 04 00:46:45 pause-026475 kubelet[3629]: I0804 00:46:45.970215    3629 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.005497    3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7-xtables-lock\") pod \"kube-proxy-lkxtd\" (UID: \"20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7\") " pod="kube-system/kube-proxy-lkxtd"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.005553    3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7-lib-modules\") pod \"kube-proxy-lkxtd\" (UID: \"20bd83a7-0b6b-45d5-8cbe-716d2cb5eff7\") " pod="kube-system/kube-proxy-lkxtd"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.252004    3629 scope.go:117] "RemoveContainer" containerID="efb6ce46e8895b1a0f30b3fb03f711978c4990af1ea8bac23759253734666ac9"
	Aug 04 00:46:46 pause-026475 kubelet[3629]: I0804 00:46:46.254967    3629 scope.go:117] "RemoveContainer" containerID="3e31423cf2d48451c286ac076cacfce3931c1919899e9810d78082f2be415671"
	Aug 04 00:46:54 pause-026475 kubelet[3629]: I0804 00:46:54.987810    3629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-026475 -n pause-026475
helpers_test.go:261: (dbg) Run:  kubectl --context pause-026475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (54.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (7200.062s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-545482 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0804 00:57:48.632349  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/flannel-675149/client.crt: no such file or directory
E0804 00:57:56.242404  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/enable-default-cni-675149/client.crt: no such file or directory
E0804 00:57:57.868665  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/calico-675149/client.crt: no such file or directory
E0804 00:58:17.005895  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/auto-675149/client.crt: no such file or directory
E0804 00:58:22.367328  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/bridge-675149/client.crt: no such file or directory
E0804 00:58:32.617150  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/custom-flannel-675149/client.crt: no such file or directory
E0804 00:58:44.689344  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/auto-675149/client.crt: no such file or directory
E0804 00:58:47.466890  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0804 00:59:10.552697  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/flannel-675149/client.crt: no such file or directory
E0804 00:59:16.164018  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kindnet-675149/client.crt: no such file or directory
E0804 00:59:18.162697  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/enable-default-cni-675149/client.crt: no such file or directory
E0804 00:59:43.849616  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/kindnet-675149/client.crt: no such file or directory
E0804 00:59:44.288509  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/bridge-675149/client.crt: no such file or directory
E0804 01:00:14.027712  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/calico-675149/client.crt: no such file or directory
E0804 01:00:41.709367  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/calico-675149/client.crt: no such file or directory
E0804 01:00:48.773379  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/custom-flannel-675149/client.crt: no such file or directory
E0804 01:01:16.458279  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/custom-flannel-675149/client.crt: no such file or directory
E0804 01:01:26.708687  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/flannel-675149/client.crt: no such file or directory
E0804 01:01:34.318449  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/enable-default-cni-675149/client.crt: no such file or directory
E0804 01:01:54.393927  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/flannel-675149/client.crt: no such file or directory
E0804 01:02:00.443123  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/bridge-675149/client.crt: no such file or directory
E0804 01:02:02.003754  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/enable-default-cni-675149/client.crt: no such file or directory
E0804 01:02:24.416918  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0804 01:02:28.128931  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/bridge-675149/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (21m24s)
	TestNetworkPlugins/group (10m26s)
	TestStartStop (17m39s)
	TestStartStop/group/default-k8s-diff-port (10m26s)
	TestStartStop/group/default-k8s-diff-port/serial (10m26s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5m42s)
	TestStartStop/group/embed-certs (10m47s)
	TestStartStop/group/embed-certs/serial (10m47s)
	TestStartStop/group/embed-certs/serial/SecondStart (6m25s)
	TestStartStop/group/no-preload (10m48s)
	TestStartStop/group/no-preload/serial (10m48s)
	TestStartStop/group/no-preload/serial/SecondStart (6m16s)
	TestStartStop/group/old-k8s-version (11m34s)
	TestStartStop/group/old-k8s-version/serial (11m34s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (5m8s)

                                                
                                                
goroutine 3400 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000432d00, 0xc000795bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0000124f8, {0x49d5100, 0x2b, 0x2b}, {0x26b6035?, 0xc0009edb00?, 0x4a91a40?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00072ebe0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00072ebe0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000690e80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2140 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009dd040, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 514 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00143ed80, 0xc00010f7a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 417
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3241 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0015057a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3295
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3019 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017ed6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3052
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 13 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 12
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 3389 [IO wait]:
internal/poll.runtime_pollWait(0x7f404613e110, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0009ff7a0?, 0xc00154486b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0009ff7a0, {0xc00154486b, 0x7795, 0x7795})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00012cc70, {0xc00154486b?, 0x21a3760?, 0xfe0f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014d8810, {0x3698960, 0xc001328488})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc0014d8810}, {0x3698960, 0xc001328488}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00012cc70?, {0x3698aa0, 0xc0014d8810})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00012cc70, {0x3698aa0, 0xc0014d8810})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc0014d8810}, {0x36989c0, 0xc00012cc70}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00245e700?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3387
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3256 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3255
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3356 [IO wait]:
internal/poll.runtime_pollWait(0x7f404613e9c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0017edda0?, 0xc00131f0db?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017edda0, {0xc00131f0db, 0xf25, 0xf25})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0016ee338, {0xc00131f0db?, 0xc00172d530?, 0xfe8c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ee8e10, {0x3698960, 0xc0013282d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc001ee8e10}, {0x3698960, 0xc0013282d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0016ee338?, {0x3698aa0, 0xc001ee8e10})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0016ee338, {0x3698aa0, 0xc001ee8e10})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc001ee8e10}, {0x36989c0, 0xc0016ee338}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0007ae780?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3354
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3057 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00095b310, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0017ed560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00095b340)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009d2ca0, {0x3699ec0, 0xc001ee8b10}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009d2ca0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3020
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3070 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0000ded00, {0x2668a7e?, 0x60400000004?}, 0xc001a2c000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0000ded00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0000ded00, 0xc001a2c400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2079
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3065 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0000de9c0, {0x2668a7e?, 0x60400000004?}, 0xc001a2c080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0000de9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0000de9c0, 0xc001a2c200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2077
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 861 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc0016e1d40)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 858
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 350 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc001374750, 0xc001446f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0x11?, 0xc001374750, 0xc001374798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0xc0007aaea0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013747d0?, 0x592e44?, 0xc0002315c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 387
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3355 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f404613e5e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0017edce0?, 0xc001da2aa4?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017edce0, {0xc001da2aa4, 0x55c, 0x55c})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0016ee2f0, {0xc001da2aa4?, 0x5383e0?, 0x22e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ee8de0, {0x3698960, 0xc0005127a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc001ee8de0}, {0x3698960, 0xc0005127a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0016ee2f0?, {0x3698aa0, 0xc001ee8de0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0016ee2f0, {0x3698aa0, 0xc001ee8de0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc001ee8de0}, {0x36989c0, 0xc0016ee2f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001a2c080?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3354
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 696 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc000225e00, 0xc000061c80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 356
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2367 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001666510, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001b31c80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001666540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008fe6a0, {0x3699ec0, 0xc0009c6510}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008fe6a0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2458
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3269 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc001375f50, 0xc001375f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0xe0?, 0xc001375f50, 0xc001375f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0xc001d0c9c0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001375fd0?, 0x592e44?, 0xc0007f56e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3242
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2356 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2355
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3387 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x60520, 0xc001715ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001d367b0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001d367b0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0018fea80)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0018fea80)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0000de680, 0xc0018fea80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bdd80, 0xc00063c2a0}, 0xc0000de680, {0xc002129368, 0x16}, {0x0?, 0xc00172cf60?}, {0x551133?, 0x4a170f?}, {0xc0017e4000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000de680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000de680, 0xc00245e700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2841
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2891 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009ddc90, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019097a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009ddcc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008febc0, {0x3699ec0, 0xc001a65530}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008febc0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2888
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1678 [chan receive, 21 minutes]:
testing.(*T).Run(0xc001d0c340, {0x265b689?, 0x55127c?}, 0xc00196c180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001d0c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001d0c340, 0x313e5a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 351 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 350
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 173 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f404613ecb0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001ac680)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0001ac680)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000bfe5e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000bfe5e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0001e40f0, {0x36b0da0, 0xc000bfe5e0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0001e40f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00171c1a0?, 0xc00171c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 170
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 387 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009dd480, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 363
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2536 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b2e5c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2534
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2888 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009ddcc0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3255 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc001732750, 0xc001732798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0x0?, 0xc001732750, 0xc001732798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0017327d0?, 0x592e44?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3215
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2850 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001666d80, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2848
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3020 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00095b340, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3052
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 603 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001cb2a80, 0xc001651d40)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 602
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 386 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000030420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 363
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2892 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc00151af50, 0xc00151af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0x40?, 0xc00151af50, 0xc00151af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0x2020202020202020?, 0x2020202020202020?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc00186c480?, 0xc0017fe540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2888
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2368 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc00083b750, 0xc00083b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0xa0?, 0xc00083b750, 0xc00083b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0x99b656?, 0xc0013cca80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00083b7d0?, 0x592e44?, 0xc0007f51a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2458
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 349 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0009dd450, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001505ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009dd480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001519e40, {0x3699ec0, 0xc0000cdad0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001519e40, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 387
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3242 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b2e640, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3295
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3350 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x60248, 0xc001714ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001e32930)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001e32930)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0015e2300)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0015e2300)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00171d860, 0xc0015e2300)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bdd80, 0xc00017e380}, 0xc00171d860, {0xc0018a4858, 0x12}, {0x0?, 0xc001376760?}, {0x551133?, 0x4a170f?}, {0xc001454200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00171d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00171d860, 0xc001a2c000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3070
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2458 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001666540, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 1764 [chan receive, 10 minutes]:
testing.(*testContext).waitParallel(0xc000614d20)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc00171c000, 0xc00196c180)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1678
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2893 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2892
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3058 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc00083a750, 0xc00135ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0x20?, 0xc00083a750, 0xc00083a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0x99b656?, 0xc000224900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00083a7d0?, 0x592e44?, 0xc0007f4d20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3020
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2855 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2854
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3268 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001b2e610, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001505680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b2e640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00256a000, {0x3699ec0, 0xc000798090}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00256a000, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3242
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3352 [IO wait]:
internal/poll.runtime_pollWait(0x7f40444263e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0017ed860?, 0xc001608573?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017ed860, {0xc001608573, 0x1ba8d, 0x1ba8d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0016ee268, {0xc001608573?, 0x702c736e642d6562?, 0x1fe4f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ee8a20, {0x3698960, 0xc0013281d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc001ee8a20}, {0x3698960, 0xc0013281d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0016ee268?, {0x3698aa0, 0xc001ee8a20})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0016ee268, {0x3698aa0, 0xc001ee8a20})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc001ee8a20}, {0x36989c0, 0xc0016ee268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x72656e65672d6574?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3350
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3354 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x602b2, 0xc00136dab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001e32ab0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001e32ab0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0015e2480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0015e2480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00171dba0, 0xc0015e2480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bdd80, 0xc00017e1c0}, 0xc00171dba0, {0xc0018a4798, 0x11}, {0x0?, 0xc001733760?}, {0x551133?, 0x4a170f?}, {0xc001454100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00171dba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00171dba0, 0xc001a2c080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3065
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1734 [chan receive, 17 minutes]:
testing.(*T).Run(0xc001d0c9c0, {0x265b689?, 0x551133?}, 0x313e7c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001d0c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001d0c9c0, 0x313e5e8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3351 [IO wait]:
internal/poll.runtime_pollWait(0x7f404613ebb8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0017ed7a0?, 0xc0008323c2?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017ed7a0, {0xc0008323c2, 0x43e, 0x43e})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0016ee230, {0xc0008323c2?, 0x5383e0?, 0x22f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ee89f0, {0x3698960, 0xc0005124e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc001ee89f0}, {0x3698960, 0xc0005124e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0016ee230?, {0x3698aa0, 0xc001ee89f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0016ee230, {0x3698aa0, 0xc001ee89f0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc001ee89f0}, {0x36989c0, 0xc0016ee230}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001a2c000?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3350
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3385 [IO wait]:
internal/poll.runtime_pollWait(0x7f404613e3f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000b7d7a0?, 0xc0015adddb?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b7d7a0, {0xc0015adddb, 0x1c225, 0x1c225})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00012cb18, {0xc0015adddb?, 0xc001731d30?, 0x1fe7d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014d8150, {0x3698960, 0xc0016ee368})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc0014d8150}, {0x3698960, 0xc0016ee368}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00012cb18?, {0x3698aa0, 0xc0014d8150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00012cb18, {0x3698aa0, 0xc0014d8150})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc0014d8150}, {0x36989c0, 0xc00012cb18}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001731fa8?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3383
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3059 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3058
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3353 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015e2300, 0xc001ba3680)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3350
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 860 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc0016e1d40)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 858
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2854 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc00151df50, 0xc00151df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0xbc?, 0xc00151df50, 0xc00151df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0x99b656?, 0xc0023e0f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00151dfd0?, 0x592e44?, 0xc0022dac00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2850
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3254 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001666990, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022e16e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0016669c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b39aa0, {0x3699ec0, 0xc001d2fe60}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b39aa0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3215
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2723 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc001730f50, 0xc001730f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0x0?, 0xc001730f50, 0xc001730f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2686
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2853 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001666d50, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0017e0e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001666d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b39e00, {0x3699ec0, 0xc0014d95c0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b39e00, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2850
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3214 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022e1800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3250
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3384 [IO wait]:
internal/poll.runtime_pollWait(0x7f404613e208, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000b7d6e0?, 0xc001da23b2?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b7d6e0, {0xc001da23b2, 0x44e, 0x44e})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00012caf0, {0xc001da23b2?, 0x5383e0?, 0x215?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014d8120, {0x3698960, 0xc0013283d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc0014d8120}, {0x3698960, 0xc0013283d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00012caf0?, {0x3698aa0, 0xc0014d8120})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00012caf0, {0x3698aa0, 0xc0014d8120})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc0014d8120}, {0x36989c0, 0xc00012caf0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00245e480?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3383
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2841 [chan receive, 6 minutes]:
testing.(*T).Run(0xc00171c680, {0x2668a7e?, 0x60400000004?}, 0xc00245e700)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00171c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00171c680, 0xc001a2c880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2074
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2607 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b2e590, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001364480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b2e5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e6a960, {0x3699ec0, 0xc001ee84b0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e6a960, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2536
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2724 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2723
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2457 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001b31da0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2887 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019098c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2685 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0022e14a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2669
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3357 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015e2480, 0xc001ba3800)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3354
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2355 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc001374750, 0xc000b6df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0xe0?, 0xc001374750, 0xc001374798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0xc0007aaea0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013747d0?, 0x592e44?, 0xc0007aeae0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2140
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2849 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017e0f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2848
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2686 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b2fe00, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2669
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2722 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b2fdd0, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0022e1380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b2fe00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022278a0, {0x3699ec0, 0xc0007a9b00}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0022278a0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2686
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2535 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001364600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2534
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2369 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2368
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2073 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0016deb60, 0x313e7c0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1734
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2074 [chan receive, 11 minutes]:
testing.(*T).Run(0xc0016ded00, {0x265cc34?, 0x0?}, 0xc001a2c880)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016ded00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0016ded00, 0xc001e04100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2073
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2075 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc000614d20)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0016deea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0016deea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016deea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0016deea0, 0xc001e04140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2073
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2076 [chan receive, 10 minutes]:
testing.(*T).Run(0xc0016df040, {0x265cc34?, 0x0?}, 0xc0001ac600)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016df040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0016df040, 0xc001e04180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2073
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2077 [chan receive, 10 minutes]:
testing.(*T).Run(0xc0016df1e0, {0x265cc34?, 0x0?}, 0xc001a2c200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016df1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0016df1e0, 0xc001e041c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2073
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3166 [chan receive, 6 minutes]:
testing.(*T).Run(0xc00171c4e0, {0x2668a7e?, 0x60400000004?}, 0xc00245e480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00171c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00171c4e0, 0xc0001ac600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2076
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2079 [chan receive, 10 minutes]:
testing.(*T).Run(0xc0016df520, {0x265cc34?, 0x0?}, 0xc001a2c400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016df520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0016df520, 0xc001e04280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2073
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2139 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00180ecc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2608 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf40, 0xc000060060}, 0xc001732f50, 0xc0014a1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf40, 0xc000060060}, 0x0?, 0xc001732f50, 0xc001732f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf40?, 0xc000060060?}, 0x100000000000000?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001e6a950?, 0xc001e142b0?, 0xc001732fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2536
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2609 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2608
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2354 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0009dcf50, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00180eba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009dd040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00087f4d0, {0x3699ec0, 0xc001b72360}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00087f4d0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2140
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3388 [IO wait]:
internal/poll.runtime_pollWait(0x7f404613e6e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0009ff6e0?, 0xc001f042df?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0009ff6e0, {0xc001f042df, 0x521, 0x521})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00012cc50, {0xc001f042df?, 0xc00133dd30?, 0x20a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014d8720, {0x3698960, 0xc000512cb0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3698aa0, 0xc0014d8720}, {0x3698960, 0xc000512cb0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00012cc50?, {0x3698aa0, 0xc0014d8720})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00012cc50, {0x3698aa0, 0xc0014d8720})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3698aa0, 0xc0014d8720}, {0x36989c0, 0xc00012cc50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001ba3b00?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3387
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3215 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0016669c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3250
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3390 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018fea80, 0xc000061b60)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3387
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3383 [syscall, 6 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x603fa, 0xc001363ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001d36720)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001d36720)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0018fe900)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0018fe900)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0000de340, 0xc0018fe900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bdd80, 0xc000428e00}, 0xc0000de340, {0xc001afc000, 0x1c}, {0x0?, 0xc001336f60?}, {0x551133?, 0x4a170f?}, {0xc000730600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000de340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000de340, 0xc00245e480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3166
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3270 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3269
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3386 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0018fe900, 0xc0000618c0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3383
	/usr/local/go/src/os/exec/exec.go:754 +0x976
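
Note on the dump above: this is the standard goroutine dump the Go test runtime prints when the binary panics or times out while helper goroutines and child processes are still alive. Frames such as os/exec.(*Cmd).watchCtx (created by (*Cmd).Start) and test/integration.Run blocked in (*Cmd).Wait come from commands launched with a context. The following is a minimal, hypothetical Go sketch (not the minikube test helper itself) that reproduces those goroutine shapes under an assumed short deadline.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// A short deadline stands in for the integration test's overall timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// "sleep 10" stands in for a long-running "minikube start" invocation.
	cmd := exec.CommandContext(ctx, "sleep", "10")
	if err := cmd.Start(); err != nil { // Start also spawns the internal watchCtx goroutine
		fmt.Println("start failed:", err)
		return
	}
	// Wait blocks in os.(*Process).wait / blockUntilWaitable, the same frames that
	// appear for test/integration.Run in the dump above.
	if err := cmd.Wait(); err != nil {
		fmt.Println("command ended early:", err) // process is killed when ctx expires
	}
}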

                                                
                                    

Test pass (169/215)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.54
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 5.46
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 4.79
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 101.01
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
37 TestCertOptions 48.15
38 TestCertExpiration 269.29
40 TestForceSystemdFlag 60.93
41 TestForceSystemdEnv 71.41
43 TestKVMDriverInstallOrUpdate 1.25
47 TestErrorSpam/setup 43.08
48 TestErrorSpam/start 0.34
49 TestErrorSpam/status 0.75
50 TestErrorSpam/pause 1.58
51 TestErrorSpam/unpause 1.61
52 TestErrorSpam/stop 4.92
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 98.44
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 50.39
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
64 TestFunctional/serial/CacheCmd/cache/add_local 1.03
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
69 TestFunctional/serial/CacheCmd/cache/delete 0.1
70 TestFunctional/serial/MinikubeKubectlCmd 0.11
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 34.39
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.51
75 TestFunctional/serial/LogsFileCmd 1.49
76 TestFunctional/serial/InvalidService 4.04
78 TestFunctional/parallel/ConfigCmd 0.35
79 TestFunctional/parallel/DashboardCmd 15.74
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.15
82 TestFunctional/parallel/StatusCmd 1.04
86 TestFunctional/parallel/ServiceCmdConnect 12.57
87 TestFunctional/parallel/AddonsCmd 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 31.31
90 TestFunctional/parallel/SSHCmd 0.45
91 TestFunctional/parallel/CpCmd 1.53
92 TestFunctional/parallel/MySQL 25.05
93 TestFunctional/parallel/FileSync 0.24
94 TestFunctional/parallel/CertSync 1.43
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
102 TestFunctional/parallel/License 0.14
103 TestFunctional/parallel/ServiceCmd/DeployApp 20.21
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
105 TestFunctional/parallel/ProfileCmd/profile_list 0.35
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
107 TestFunctional/parallel/MountCmd/any-port 18.06
108 TestFunctional/parallel/MountCmd/specific-port 2.15
109 TestFunctional/parallel/ServiceCmd/List 0.49
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
112 TestFunctional/parallel/ServiceCmd/Format 0.41
113 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
114 TestFunctional/parallel/ServiceCmd/URL 0.34
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
128 TestFunctional/parallel/ImageCommands/ImageBuild 4.44
129 TestFunctional/parallel/ImageCommands/Setup 0.4
130 TestFunctional/parallel/Version/short 0.05
131 TestFunctional/parallel/Version/components 0.76
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.72
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.13
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.26
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.74
142 TestFunctional/delete_echo-server_images 0.04
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestMultiControlPlane/serial/StartCluster 208.48
149 TestMultiControlPlane/serial/DeployApp 4.83
150 TestMultiControlPlane/serial/PingHostFromPods 1.28
151 TestMultiControlPlane/serial/AddWorkerNode 58.28
152 TestMultiControlPlane/serial/NodeLabels 0.07
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
154 TestMultiControlPlane/serial/CopyFile 13.01
156 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
158 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
160 TestMultiControlPlane/serial/DeleteSecondaryNode 17.45
161 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
163 TestMultiControlPlane/serial/RestartCluster 480.99
164 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
165 TestMultiControlPlane/serial/AddSecondaryNode 76.39
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
170 TestJSONOutput/start/Command 97.29
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.71
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.62
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 7.4
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.2
198 TestMainNoArgs 0.04
199 TestMinikubeProfile 89.43
202 TestMountStart/serial/StartWithMountFirst 24.35
203 TestMountStart/serial/VerifyMountFirst 0.38
204 TestMountStart/serial/StartWithMountSecond 26.99
205 TestMountStart/serial/VerifyMountSecond 0.39
206 TestMountStart/serial/DeleteFirst 0.87
207 TestMountStart/serial/VerifyMountPostDelete 0.38
208 TestMountStart/serial/Stop 1.28
209 TestMountStart/serial/RestartStopped 22.31
210 TestMountStart/serial/VerifyMountPostStop 0.38
213 TestMultiNode/serial/FreshStart2Nodes 123.1
214 TestMultiNode/serial/DeployApp2Nodes 3.8
215 TestMultiNode/serial/PingHostFrom2Pods 0.79
216 TestMultiNode/serial/AddNode 47.59
217 TestMultiNode/serial/MultiNodeLabels 0.06
218 TestMultiNode/serial/ProfileList 0.22
219 TestMultiNode/serial/CopyFile 7.32
220 TestMultiNode/serial/StopNode 2.29
221 TestMultiNode/serial/StartAfterStop 37.42
223 TestMultiNode/serial/DeleteNode 2.29
225 TestMultiNode/serial/RestartMultiNode 187.55
226 TestMultiNode/serial/ValidateNameConflict 45.59
233 TestScheduledStopUnix 115.78
237 TestRunningBinaryUpgrade 158.98
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
246 TestNoKubernetes/serial/StartWithK8s 93.1
255 TestStoppedBinaryUpgrade/Setup 0.5
256 TestStoppedBinaryUpgrade/Upgrade 152.31
257 TestNoKubernetes/serial/StartWithStopK8s 41.71
258 TestNoKubernetes/serial/Start 48.19
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
260 TestNoKubernetes/serial/ProfileList 6.71
261 TestNoKubernetes/serial/Stop 1.34
262 TestNoKubernetes/serial/StartNoArgs 32.55
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
264 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
273 TestPause/serial/Start 58.6
x
+
TestDownloadOnly/v1.20.0/json-events (8.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-255558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-255558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.535138228s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.54s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-255558
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-255558: exit status 85 (59.131097ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-255558 | jenkins | v1.33.1 | 03 Aug 24 23:02 UTC |          |
	|         | -p download-only-255558        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:02:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:02:52.215185  331110 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:02:52.215749  331110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:02:52.215810  331110 out.go:304] Setting ErrFile to fd 2...
	I0803 23:02:52.215828  331110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:02:52.216396  331110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	W0803 23:02:52.216815  331110 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19370-323890/.minikube/config/config.json: open /home/jenkins/minikube-integration/19370-323890/.minikube/config/config.json: no such file or directory
	I0803 23:02:52.217473  331110 out.go:298] Setting JSON to true
	I0803 23:02:52.218429  331110 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27920,"bootTime":1722698252,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:02:52.218503  331110 start.go:139] virtualization: kvm guest
	I0803 23:02:52.220689  331110 out.go:97] [download-only-255558] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0803 23:02:52.220813  331110 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball: no such file or directory
	I0803 23:02:52.220869  331110 notify.go:220] Checking for updates...
	I0803 23:02:52.221975  331110 out.go:169] MINIKUBE_LOCATION=19370
	I0803 23:02:52.223280  331110 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:02:52.224596  331110 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:02:52.225702  331110 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:02:52.226865  331110 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0803 23:02:52.229103  331110 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 23:02:52.229326  331110 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:02:52.266303  331110 out.go:97] Using the kvm2 driver based on user configuration
	I0803 23:02:52.266338  331110 start.go:297] selected driver: kvm2
	I0803 23:02:52.266348  331110 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:02:52.266711  331110 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:02:52.266815  331110 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:02:52.283642  331110 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:02:52.283700  331110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:02:52.284211  331110 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0803 23:02:52.284391  331110 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 23:02:52.284453  331110 cni.go:84] Creating CNI manager for ""
	I0803 23:02:52.284466  331110 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:02:52.284474  331110 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 23:02:52.284537  331110 start.go:340] cluster config:
	{Name:download-only-255558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-255558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:02:52.284722  331110 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:02:52.286431  331110 out.go:97] Downloading VM boot image ...
	I0803 23:02:52.286472  331110 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0803 23:02:55.962337  331110 out.go:97] Starting "download-only-255558" primary control-plane node in "download-only-255558" cluster
	I0803 23:02:55.962370  331110 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0803 23:02:55.993971  331110 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0803 23:02:55.994014  331110 cache.go:56] Caching tarball of preloaded images
	I0803 23:02:55.994194  331110 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0803 23:02:55.996102  331110 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0803 23:02:55.996135  331110 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0803 23:02:56.028656  331110 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-255558 host does not exist
	  To start a cluster, run: "minikube start -p download-only-255558"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
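
In the LogsDuration subtests above, "minikube logs" exits with status 85 against a download-only profile whose host was never created, and the subtest still passes, so a non-zero exit is evidently the accepted outcome at that point. Below is a small, hypothetical Go sketch of capturing an expected non-zero exit code via os/exec; the function name and flow are illustrative and are not the actual assertion in aaa_download_only_test.go.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndGetExitCode runs a command and reports a non-zero exit code as data
// rather than as an error, mirroring how an "expected failure" can be asserted.
func runAndGetExitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil // command ran but exited non-zero
	}
	if err != nil {
		return 0, err // command could not be started at all
	}
	return 0, nil
}

func main() {
	// "false" always exits 1; it stands in here for an invocation that is
	// expected to fail, such as "logs" against a download-only profile.
	code, err := runAndGetExitCode("false")
	if err != nil {
		fmt.Println("could not run command:", err)
		return
	}
	if code == 0 {
		fmt.Println("unexpected success")
		return
	}
	fmt.Println("got expected non-zero exit code:", code)
}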

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-255558
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (5.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-698360 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-698360 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.458706353s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (5.46s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-698360
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-698360: exit status 85 (60.916688ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-255558 | jenkins | v1.33.1 | 03 Aug 24 23:02 UTC |                     |
	|         | -p download-only-255558        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC | 03 Aug 24 23:03 UTC |
	| delete  | -p download-only-255558        | download-only-255558 | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC | 03 Aug 24 23:03 UTC |
	| start   | -o=json --download-only        | download-only-698360 | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC |                     |
	|         | -p download-only-698360        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:03:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:03:01.082415  331302 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:03:01.082688  331302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:03:01.082698  331302 out.go:304] Setting ErrFile to fd 2...
	I0803 23:03:01.082702  331302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:03:01.082871  331302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:03:01.083447  331302 out.go:298] Setting JSON to true
	I0803 23:03:01.084376  331302 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27929,"bootTime":1722698252,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:03:01.084443  331302 start.go:139] virtualization: kvm guest
	I0803 23:03:01.086319  331302 out.go:97] [download-only-698360] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:03:01.086486  331302 notify.go:220] Checking for updates...
	I0803 23:03:01.088020  331302 out.go:169] MINIKUBE_LOCATION=19370
	I0803 23:03:01.089280  331302 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:03:01.090600  331302 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:03:01.091837  331302 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:03:01.092886  331302 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0803 23:03:01.094963  331302 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 23:03:01.095188  331302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:03:01.127810  331302 out.go:97] Using the kvm2 driver based on user configuration
	I0803 23:03:01.127842  331302 start.go:297] selected driver: kvm2
	I0803 23:03:01.127852  331302 start.go:901] validating driver "kvm2" against <nil>
	I0803 23:03:01.128201  331302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:03:01.128293  331302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19370-323890/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0803 23:03:01.144936  331302 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0803 23:03:01.144998  331302 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 23:03:01.145486  331302 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0803 23:03:01.145666  331302 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 23:03:01.145739  331302 cni.go:84] Creating CNI manager for ""
	I0803 23:03:01.145752  331302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0803 23:03:01.145763  331302 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0803 23:03:01.145824  331302 start.go:340] cluster config:
	{Name:download-only-698360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-698360 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:03:01.145925  331302 iso.go:125] acquiring lock: {Name:mke18ad45a0f8e7ec51adacd21c7952f17bf0357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:03:01.147344  331302 out.go:97] Starting "download-only-698360" primary control-plane node in "download-only-698360" cluster
	I0803 23:03:01.147362  331302 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:03:01.174639  331302 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:03:01.174676  331302 cache.go:56] Caching tarball of preloaded images
	I0803 23:03:01.174863  331302 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:03:01.176618  331302 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0803 23:03:01.176648  331302 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0803 23:03:01.210263  331302 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0803 23:03:05.138800  331302 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0803 23:03:05.138904  331302 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19370-323890/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0803 23:03:05.919729  331302 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0803 23:03:05.920109  331302 profile.go:143] Saving config to /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/download-only-698360/config.json ...
	I0803 23:03:05.920144  331302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/download-only-698360/config.json: {Name:mkae06824e30d5136cca446edc4351786a923c78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:03:05.920312  331302 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0803 23:03:05.920441  331302 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19370-323890/.minikube/cache/linux/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-698360 host does not exist
	  To start a cluster, run: "minikube start -p download-only-698360"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-698360
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/json-events (4.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-550718 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-550718 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.794457845s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (4.79s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-550718
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-550718: exit status 85 (62.434256ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-255558 | jenkins | v1.33.1 | 03 Aug 24 23:02 UTC |                     |
	|         | -p download-only-255558           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC | 03 Aug 24 23:03 UTC |
	| delete  | -p download-only-255558           | download-only-255558 | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC | 03 Aug 24 23:03 UTC |
	| start   | -o=json --download-only           | download-only-698360 | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC |                     |
	|         | -p download-only-698360           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC | 03 Aug 24 23:03 UTC |
	| delete  | -p download-only-698360           | download-only-698360 | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC | 03 Aug 24 23:03 UTC |
	| start   | -o=json --download-only           | download-only-550718 | jenkins | v1.33.1 | 03 Aug 24 23:03 UTC |                     |
	|         | -p download-only-550718           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:03:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:03:06.874619  331493 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:03:06.875064  331493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:03:06.875166  331493 out.go:304] Setting ErrFile to fd 2...
	I0803 23:03:06.875189  331493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:03:06.875711  331493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:03:06.876751  331493 out.go:298] Setting JSON to true
	I0803 23:03:06.877682  331493 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27935,"bootTime":1722698252,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:03:06.877749  331493 start.go:139] virtualization: kvm guest
	I0803 23:03:06.879588  331493 out.go:97] [download-only-550718] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:03:06.879774  331493 notify.go:220] Checking for updates...
	I0803 23:03:06.881028  331493 out.go:169] MINIKUBE_LOCATION=19370
	I0803 23:03:06.882557  331493 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:03:06.884091  331493 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:03:06.885520  331493 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:03:06.886739  331493 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-550718 host does not exist
	  To start a cluster, run: "minikube start -p download-only-550718"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.14s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-550718
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-464113 --alsologtostderr --binary-mirror http://127.0.0.1:45747 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-464113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-464113
--- PASS: TestBinaryMirror (0.57s)
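
This test appears to verify only that minikube can fetch its binaries through an alternate mirror URL. A minimal hand-run sketch, assuming a locally served mirror (the URL and profile name below are illustrative, not taken from this run):

	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:8080 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p binary-mirror-demo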

TestOffline (101.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-404249 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-404249 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.462431557s)
helpers_test.go:175: Cleaning up "offline-crio-404249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-404249
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-404249: (1.54605837s)
--- PASS: TestOffline (101.01s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-033173
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-033173: exit status 85 (49.048147ms)

-- stdout --
	* Profile "addons-033173" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-033173"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-033173
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-033173: exit status 85 (48.379265ms)

-- stdout --
	* Profile "addons-033173" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-033173"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestCertOptions (48.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-941979 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-941979 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.694298632s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-941979 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-941979 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-941979 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-941979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-941979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-941979: (1.009031244s)
--- PASS: TestCertOptions (48.15s)
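
The run above starts a profile with extra apiserver SANs and a non-default port, then reads the generated certificate back. The same check by hand (the grep filter is an addition for readability, not part of the test output):

	out/minikube-linux-amd64 -p cert-options-941979 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	kubectl --context cert-options-941979 config view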

TestCertExpiration (269.29s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-443385 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-443385 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.457520977s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-443385 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-443385 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (21.780118666s)
helpers_test.go:175: Cleaning up "cert-expiration-443385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-443385
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-443385: (1.047790697s)
--- PASS: TestCertExpiration (269.29s)
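
The two starts above exercise certificate rotation: the first issues 3-minute certificates, and the later restart with a one-year window is expected to regenerate them. A minimal sketch of the same flow (profile name illustrative):

	out/minikube-linux-amd64 start -p cert-exp-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p cert-exp-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio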

TestForceSystemdFlag (60.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-040288 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-040288 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.702941228s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-040288 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-040288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-040288
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-040288: (1.02426264s)
--- PASS: TestForceSystemdFlag (60.93s)
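
docker_test.go:132 reads the CRI-O drop-in after the profile was started with --force-systemd; that the assertion looks for the systemd cgroup manager is an assumption about the test, since the check itself is not shown in this output:

	out/minikube-linux-amd64 -p force-systemd-flag-040288 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager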

TestForceSystemdEnv (71.41s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-439963 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-439963 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.398746896s)
helpers_test.go:175: Cleaning up "force-systemd-env-439963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-439963
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-439963: (1.007181765s)
--- PASS: TestForceSystemdEnv (71.41s)

TestKVMDriverInstallOrUpdate (1.25s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.25s)

TestErrorSpam/setup (43.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-428878 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-428878 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-428878 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-428878 --driver=kvm2  --container-runtime=crio: (43.082993101s)
--- PASS: TestErrorSpam/setup (43.08s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (4.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 stop: (2.275286521s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 stop: (1.488542514s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-428878 --log_dir /tmp/nospam-428878 stop: (1.152193547s)
--- PASS: TestErrorSpam/stop (4.92s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19370-323890/.minikube/files/etc/test/nested/copy/331097/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (98.44s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-189533 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-189533 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.438724303s)
--- PASS: TestFunctional/serial/StartWithProxy (98.44s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.39s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-189533 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-189533 --alsologtostderr -v=8: (50.386894605s)
functional_test.go:659: soft start took 50.388055612s for "functional-189533" cluster.
--- PASS: TestFunctional/serial/SoftStart (50.39s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-189533 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 cache add registry.k8s.io/pause:3.1: (1.06387596s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 cache add registry.k8s.io/pause:3.3: (1.091922608s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 cache add registry.k8s.io/pause:latest: (1.164413594s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-189533 /tmp/TestFunctionalserialCacheCmdcacheadd_local4079663017/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cache add minikube-local-cache-test:functional-189533
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cache delete minikube-local-cache-test:functional-189533
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-189533
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (219.363359ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
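
The reload flow above, condensed into the three commands that matter (all taken verbatim from this run): remove the cached image from the node, reload the cache, confirm the image is back.

	out/minikube-linux-amd64 -p functional-189533 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-189533 cache reload
	out/minikube-linux-amd64 -p functional-189533 ssh sudo crictl inspecti registry.k8s.io/pause:latest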

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 kubectl -- --context functional-189533 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-189533 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (34.39s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-189533 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-189533 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.389697791s)
functional_test.go:757: restart took 34.389808908s for "functional-189533" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.39s)
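
--extra-config hands component flags straight through to the named component; the restart above enables an additional admission plugin on the running profile. The command, verbatim from this run:

	out/minikube-linux-amd64 start -p functional-189533 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all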

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-189533 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 logs: (1.509898474s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.49s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 logs --file /tmp/TestFunctionalserialLogsFileCmd2038455086/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 logs --file /tmp/TestFunctionalserialLogsFileCmd2038455086/001/logs.txt: (1.493733278s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.49s)

TestFunctional/serial/InvalidService (4.04s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-189533 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-189533
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-189533: exit status 115 (289.822547ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.143:31125 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-189533 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)
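
The non-zero exit above is the point of the test: the Service selects no running pod, so "minikube service" exits 115 with SVC_UNREACHABLE. The same sequence by hand, using the repository's test manifest:

	kubectl --context functional-189533 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-189533
	kubectl --context functional-189533 delete -f testdata/invalidsvc.yaml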

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 config get cpus: exit status 14 (52.945618ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 config get cpus: exit status 14 (55.397346ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
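
Exit status 14 above is minikube's "key not found" result for "config get" on an unset key. A minimal sketch of the set/get/unset round trip shown in the log:

	out/minikube-linux-amd64 -p functional-189533 config set cpus 2
	out/minikube-linux-amd64 -p functional-189533 config get cpus
	out/minikube-linux-amd64 -p functional-189533 config unset cpus
	out/minikube-linux-amd64 -p functional-189533 config get cpus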

TestFunctional/parallel/DashboardCmd (15.74s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-189533 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-189533 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 345323: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.74s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-189533 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-189533 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.808444ms)

-- stdout --
	* [functional-189533] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0803 23:47:48.677255  344878 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:47:48.677612  344878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:47:48.677625  344878 out.go:304] Setting ErrFile to fd 2...
	I0803 23:47:48.677631  344878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:47:48.677931  344878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:47:48.678757  344878 out.go:298] Setting JSON to false
	I0803 23:47:48.680259  344878 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":30617,"bootTime":1722698252,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:47:48.680361  344878 start.go:139] virtualization: kvm guest
	I0803 23:47:48.682646  344878 out.go:177] * [functional-189533] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0803 23:47:48.683926  344878 out.go:177]   - MINIKUBE_LOCATION=19370
	I0803 23:47:48.683939  344878 notify.go:220] Checking for updates...
	I0803 23:47:48.686491  344878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:47:48.687704  344878 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:47:48.688872  344878 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:47:48.689998  344878 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:47:48.691057  344878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:47:48.692588  344878 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:47:48.693032  344878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:47:48.693081  344878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:47:48.709191  344878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0803 23:47:48.709736  344878 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:47:48.710354  344878 main.go:141] libmachine: Using API Version  1
	I0803 23:47:48.710385  344878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:47:48.710798  344878 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:47:48.711067  344878 main.go:141] libmachine: (functional-189533) Calling .DriverName
	I0803 23:47:48.711354  344878 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:47:48.711703  344878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:47:48.711755  344878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:47:48.727275  344878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37707
	I0803 23:47:48.727818  344878 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:47:48.728354  344878 main.go:141] libmachine: Using API Version  1
	I0803 23:47:48.728380  344878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:47:48.728696  344878 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:47:48.728896  344878 main.go:141] libmachine: (functional-189533) Calling .DriverName
	I0803 23:47:48.764150  344878 out.go:177] * Using the kvm2 driver based on existing profile
	I0803 23:47:48.765440  344878 start.go:297] selected driver: kvm2
	I0803 23:47:48.765456  344878 start.go:901] validating driver "kvm2" against &{Name:functional-189533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-189533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:47:48.765615  344878 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:47:48.767802  344878 out.go:177] 
	W0803 23:47:48.768847  344878 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0803 23:47:48.769947  344878 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-189533 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
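
--dry-run validates the requested configuration without touching the VM: asking for 250MB fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) because it is below the usable minimum of 1800MB reported above, while the same command without the memory override validates cleanly:

	out/minikube-linux-amd64 start -p functional-189533 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio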

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-189533 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-189533 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.000459ms)

-- stdout --
	* [functional-189533] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0803 23:47:48.964862  344942 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:47:48.965544  344942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:47:48.965559  344942 out.go:304] Setting ErrFile to fd 2...
	I0803 23:47:48.965566  344942 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:47:48.965943  344942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0803 23:47:48.966517  344942 out.go:298] Setting JSON to false
	I0803 23:47:48.967752  344942 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":30617,"bootTime":1722698252,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0803 23:47:48.967836  344942 start.go:139] virtualization: kvm guest
	I0803 23:47:48.970143  344942 out.go:177] * [functional-189533] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0803 23:47:48.971811  344942 out.go:177]   - MINIKUBE_LOCATION=19370
	I0803 23:47:48.971882  344942 notify.go:220] Checking for updates...
	I0803 23:47:48.974085  344942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:47:48.975290  344942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	I0803 23:47:48.976886  344942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	I0803 23:47:48.978044  344942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0803 23:47:48.979225  344942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:47:48.980795  344942 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0803 23:47:48.981285  344942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:47:48.981338  344942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:47:48.997448  344942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0803 23:47:48.997956  344942 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:47:48.998671  344942 main.go:141] libmachine: Using API Version  1
	I0803 23:47:48.998694  344942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:47:48.999172  344942 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:47:48.999471  344942 main.go:141] libmachine: (functional-189533) Calling .DriverName
	I0803 23:47:48.999806  344942 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:47:49.000262  344942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0803 23:47:49.000325  344942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0803 23:47:49.017753  344942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0803 23:47:49.018231  344942 main.go:141] libmachine: () Calling .GetVersion
	I0803 23:47:49.018754  344942 main.go:141] libmachine: Using API Version  1
	I0803 23:47:49.018772  344942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0803 23:47:49.019093  344942 main.go:141] libmachine: () Calling .GetMachineName
	I0803 23:47:49.019270  344942 main.go:141] libmachine: (functional-189533) Calling .DriverName
	I0803 23:47:49.054598  344942 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0803 23:47:49.055876  344942 start.go:297] selected driver: kvm2
	I0803 23:47:49.055890  344942 start.go:901] validating driver "kvm2" against &{Name:functional-189533 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-189533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:47:49.056024  344942 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:47:49.057998  344942 out.go:177] 
	W0803 23:47:49.059125  344942 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0803 23:47:49.060349  344942 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (12.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-189533 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-189533 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-4gjzt" [5cf1b821-e60f-4fd1-823e-be56f0caea59] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-4gjzt" [5cf1b821-e60f-4fd1-823e-be56f0caea59] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004372501s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.143:32037
functional_test.go:1671: http://192.168.39.143:32037: success! body:

Hostname: hello-node-connect-57b4589c47-4gjzt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.143:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.143:32037
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.57s)
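
The connectivity check above reduces to three commands plus a plain HTTP request; the curl line is an illustrative addition, the rest is verbatim from this run:

	kubectl --context functional-189533 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-189533 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-189533 service hello-node-connect --url
	curl http://192.168.39.143:32037/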

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (31.31s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [916d529d-22e4-4c22-aa47-6f98970860c1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007366717s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-189533 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-189533 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-189533 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-189533 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [482dec14-072e-4955-b30f-0f8f30e2bfff] Pending
helpers_test.go:344: "sp-pod" [482dec14-072e-4955-b30f-0f8f30e2bfff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [482dec14-072e-4955-b30f-0f8f30e2bfff] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004035489s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-189533 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-189533 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-189533 delete -f testdata/storage-provisioner/pod.yaml: (1.407952518s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-189533 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e21c50a0-ba25-4f56-826f-80fe97b5542c] Pending
helpers_test.go:344: "sp-pod" [e21c50a0-ba25-4f56-826f-80fe97b5542c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e21c50a0-ba25-4f56-826f-80fe97b5542c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004903388s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-189533 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.31s)
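
For reference, the PVC round-trip exercised above can be replayed by hand against the same profile; a minimal sketch, assuming the functional-189533 profile is still running and the repo's testdata manifests are available in the working directory:
	# create the claim and a pod that mounts it, then write through the mount
	kubectl --context functional-189533 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-189533 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-189533 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod and confirm the file survived on the provisioned volume
	kubectl --context functional-189533 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-189533 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-189533 exec sp-pod -- ls /tmp/mount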

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.53s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh -n functional-189533 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cp functional-189533:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1569460999/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh -n functional-189533 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh -n functional-189533 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.53s)
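
The cp round-trip above can be repeated by hand with the same commands; a short sketch, assuming the functional-189533 profile (the host-side destination path below is only an example):
	# push a file into the guest, then read it back over ssh
	out/minikube-linux-amd64 -p functional-189533 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-189533 ssh -n functional-189533 "sudo cat /home/docker/cp-test.txt"
	# pull the same file back out of the guest
	out/minikube-linux-amd64 -p functional-189533 cp functional-189533:/home/docker/cp-test.txt /tmp/cp-test.txt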

TestFunctional/parallel/MySQL (25.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-189533 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-z4dld" [69d866fe-01ff-4d59-8a1e-5fcb0bc2b44e] Pending
helpers_test.go:344: "mysql-64454c8b5c-z4dld" [69d866fe-01ff-4d59-8a1e-5fcb0bc2b44e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-z4dld" [69d866fe-01ff-4d59-8a1e-5fcb0bc2b44e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.006905591s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-189533 exec mysql-64454c8b5c-z4dld -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-189533 exec mysql-64454c8b5c-z4dld -- mysql -ppassword -e "show databases;": exit status 1 (315.685015ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-189533 exec mysql-64454c8b5c-z4dld -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-189533 exec mysql-64454c8b5c-z4dld -- mysql -ppassword -e "show databases;": exit status 1 (180.039517ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-189533 exec mysql-64454c8b5c-z4dld -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.05s)
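
The two non-zero exits above appear to be transient start-up noise: ERROR 1045 and ERROR 2002 show up while mysqld is still initializing inside the pod, and the test simply retries the query until it succeeds, which it does here. A manual probe with the same command, assuming the pod name from this run:
	# retry until mysqld accepts the connection (the test allows up to 10m)
	until kubectl --context functional-189533 exec mysql-64454c8b5c-z4dld -- mysql -ppassword -e "show databases;"; do sleep 5; done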

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/331097/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /etc/test/nested/copy/331097/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/331097.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /etc/ssl/certs/331097.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/331097.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /usr/share/ca-certificates/331097.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3310972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /etc/ssl/certs/3310972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3310972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /usr/share/ca-certificates/3310972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.43s)
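
CertSync checks that the host-side test certificates (named with the per-run identifier 331097 here) are visible inside the guest both as .pem files and under their hashed names (51391683.0, 3ec20f2e.0). A quick manual spot-check, assuming the same profile:
	# the same certificate should be readable under both names inside the VM
	out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /etc/ssl/certs/331097.pem"
	out/minikube-linux-amd64 -p functional-189533 ssh "sudo cat /etc/ssl/certs/51391683.0"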

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-189533 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh "sudo systemctl is-active docker": exit status 1 (220.525239ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh "sudo systemctl is-active containerd": exit status 1 (235.554782ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
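
With --container-runtime=crio the docker and containerd units are expected to be inactive, so the two non-zero exits above are the desired outcome: systemctl is-active prints "inactive" and returns a non-zero status, which minikube ssh propagates. The same check by hand, assuming the crio-backed profile from this run:
	# both should print "inactive" and exit non-zero on this profile
	out/minikube-linux-amd64 -p functional-189533 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-189533 ssh "sudo systemctl is-active containerd"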

TestFunctional/parallel/License (0.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (20.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-189533 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-189533 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-v7ppv" [beff1486-a4ba-4b13-9ccb-2ee2f9b1bdbf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-v7ppv" [beff1486-a4ba-4b13-9ccb-2ee2f9b1bdbf] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.003374134s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.21s)
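
The hello-node deployment and NodePort service used by the remaining ServiceCmd subtests can be recreated with the same calls; a sketch, assuming the echoserver image is pullable from the cluster:
	kubectl --context functional-189533 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-189533 expose deployment hello-node --type=NodePort --port=8080
	# resolve the NodePort URL once the pod is Running
	out/minikube-linux-amd64 -p functional-189533 service hello-node --url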

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "295.106742ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "54.09573ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "253.408588ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "50.144701ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)
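
The ProfileCmd subtests above time the profile listing variants; in this run the --light form returns in roughly 50ms versus 250-300ms for the full listing, which appears to be because it skips the per-profile status look-ups. The same commands for reference:
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light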

TestFunctional/parallel/MountCmd/any-port (18.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdany-port488928377/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722728847687815240" to /tmp/TestFunctionalparallelMountCmdany-port488928377/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722728847687815240" to /tmp/TestFunctionalparallelMountCmdany-port488928377/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722728847687815240" to /tmp/TestFunctionalparallelMountCmdany-port488928377/001/test-1722728847687815240
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (208.43547ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  3 23:47 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  3 23:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  3 23:47 test-1722728847687815240
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh cat /mount-9p/test-1722728847687815240
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-189533 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [25d27b18-c83e-480c-a53b-421193601057] Pending
helpers_test.go:344: "busybox-mount" [25d27b18-c83e-480c-a53b-421193601057] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [25d27b18-c83e-480c-a53b-421193601057] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [25d27b18-c83e-480c-a53b-421193601057] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.004930633s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-189533 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdany-port488928377/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.06s)
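
The 9p mount flow above, condensed into the corresponding commands; a sketch assuming the functional-189533 profile, with /tmp/mount-src standing in for the temporary host directory the test generates. The first findmnt failure in the log is just the probe racing the mount becoming ready and is retried.
	# host side: expose a host directory inside the guest at /mount-9p (runs until interrupted)
	out/minikube-linux-amd64 mount -p functional-189533 /tmp/mount-src:/mount-9p &
	# guest side: confirm the 9p mount and list its contents
	out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-189533 ssh -- ls -la /mount-9p
	# clean up the mount when done
	out/minikube-linux-amd64 -p functional-189533 ssh "sudo umount -f /mount-9p"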

TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdspecific-port580633692/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.283025ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdspecific-port580633692/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh "sudo umount -f /mount-9p": exit status 1 (254.954824ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-189533 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdspecific-port580633692/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 service list -o json
functional_test.go:1490: Took "435.342524ms" to run "out/minikube-linux-amd64 -p functional-189533 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.143:32325
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)
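
The HTTPS subtest above, together with the Format and URL subtests below, resolves the same hello-node NodePort (32325 in this run) through different output modes of the service command; for reference:
	out/minikube-linux-amd64 -p functional-189533 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-189533 service hello-node --url --format={{.IP}}
	out/minikube-linux-amd64 -p functional-189533 service hello-node --url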

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2423451752/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2423451752/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2423451752/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T" /mount1: exit status 1 (293.613995ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-189533 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2423451752/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2423451752/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-189533 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2423451752/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.143:32325
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-189533 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-189533
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-189533
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-189533 image ls --format short --alsologtostderr:
I0803 23:47:57.523008  345747 out.go:291] Setting OutFile to fd 1 ...
I0803 23:47:57.523309  345747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:57.523324  345747 out.go:304] Setting ErrFile to fd 2...
I0803 23:47:57.523331  345747 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:57.523600  345747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
I0803 23:47:57.524274  345747 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:57.524374  345747 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:57.524730  345747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:57.524779  345747 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:57.544576  345747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42791
I0803 23:47:57.545213  345747 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:57.545945  345747 main.go:141] libmachine: Using API Version  1
I0803 23:47:57.545976  345747 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:57.546457  345747 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:57.546715  345747 main.go:141] libmachine: (functional-189533) Calling .GetState
I0803 23:47:57.549930  345747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:57.549980  345747 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:57.565681  345747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
I0803 23:47:57.566090  345747 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:57.566505  345747 main.go:141] libmachine: Using API Version  1
I0803 23:47:57.566518  345747 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:57.566859  345747 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:57.566986  345747 main.go:141] libmachine: (functional-189533) Calling .DriverName
I0803 23:47:57.567113  345747 ssh_runner.go:195] Run: systemctl --version
I0803 23:47:57.567136  345747 main.go:141] libmachine: (functional-189533) Calling .GetSSHHostname
I0803 23:47:57.570128  345747 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:57.570481  345747 main.go:141] libmachine: (functional-189533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:79:96", ip: ""} in network mk-functional-189533: {Iface:virbr1 ExpiryTime:2024-08-04 00:44:21 +0000 UTC Type:0 Mac:52:54:00:66:79:96 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-189533 Clientid:01:52:54:00:66:79:96}
I0803 23:47:57.570507  345747 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined IP address 192.168.39.143 and MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:57.570816  345747 main.go:141] libmachine: (functional-189533) Calling .GetSSHPort
I0803 23:47:57.570986  345747 main.go:141] libmachine: (functional-189533) Calling .GetSSHKeyPath
I0803 23:47:57.571096  345747 main.go:141] libmachine: (functional-189533) Calling .GetSSHUsername
I0803 23:47:57.571220  345747 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/functional-189533/id_rsa Username:docker}
I0803 23:47:57.684713  345747 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:47:57.814724  345747 main.go:141] libmachine: Making call to close driver server
I0803 23:47:57.814743  345747 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:57.815107  345747 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:57.815129  345747 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:47:57.815139  345747 main.go:141] libmachine: Making call to close driver server
I0803 23:47:57.815147  345747 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:57.817041  345747 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:57.817061  345747 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-189533 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-189533  | 912db3b480366 | 3.33kB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| docker.io/kicbase/echo-server           | functional-189533  | 9056ab77afb8e | 4.94MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-189533 image ls --format table --alsologtostderr:
I0803 23:47:58.200492  345883 out.go:291] Setting OutFile to fd 1 ...
I0803 23:47:58.200621  345883 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:58.200631  345883 out.go:304] Setting ErrFile to fd 2...
I0803 23:47:58.200636  345883 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:58.200810  345883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
I0803 23:47:58.201372  345883 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:58.201466  345883 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:58.201882  345883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:58.201935  345883 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:58.218132  345883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36755
I0803 23:47:58.218641  345883 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:58.219236  345883 main.go:141] libmachine: Using API Version  1
I0803 23:47:58.219267  345883 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:58.219672  345883 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:58.219920  345883 main.go:141] libmachine: (functional-189533) Calling .GetState
I0803 23:47:58.221796  345883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:58.221849  345883 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:58.237632  345883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46101
I0803 23:47:58.238104  345883 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:58.238597  345883 main.go:141] libmachine: Using API Version  1
I0803 23:47:58.238625  345883 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:58.239000  345883 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:58.239193  345883 main.go:141] libmachine: (functional-189533) Calling .DriverName
I0803 23:47:58.239434  345883 ssh_runner.go:195] Run: systemctl --version
I0803 23:47:58.239473  345883 main.go:141] libmachine: (functional-189533) Calling .GetSSHHostname
I0803 23:47:58.242157  345883 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:58.242537  345883 main.go:141] libmachine: (functional-189533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:79:96", ip: ""} in network mk-functional-189533: {Iface:virbr1 ExpiryTime:2024-08-04 00:44:21 +0000 UTC Type:0 Mac:52:54:00:66:79:96 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-189533 Clientid:01:52:54:00:66:79:96}
I0803 23:47:58.242565  345883 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined IP address 192.168.39.143 and MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:58.242776  345883 main.go:141] libmachine: (functional-189533) Calling .GetSSHPort
I0803 23:47:58.242994  345883 main.go:141] libmachine: (functional-189533) Calling .GetSSHKeyPath
I0803 23:47:58.243179  345883 main.go:141] libmachine: (functional-189533) Calling .GetSSHUsername
I0803 23:47:58.243318  345883 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/functional-189533/id_rsa Username:docker}
I0803 23:47:58.353844  345883 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:47:58.410326  345883 main.go:141] libmachine: Making call to close driver server
I0803 23:47:58.410344  345883 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:58.410632  345883 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:58.410649  345883 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:47:58.410664  345883 main.go:141] libmachine: Making call to close driver server
I0803 23:47:58.410672  345883 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:58.410903  345883 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:58.410917  345883 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:47:58.410934  345883 main.go:141] libmachine: (functional-189533) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
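
The ImageList subtests report the same image inventory in four output formats; each invocation ends up running "sudo crictl images --output json" inside the guest, as the stderr traces above show, and only the client-side rendering differs. For reference:
	out/minikube-linux-amd64 -p functional-189533 image ls --format short
	out/minikube-linux-amd64 -p functional-189533 image ls --format table
	out/minikube-linux-amd64 -p functional-189533 image ls --format json
	out/minikube-linux-amd64 -p functional-189533 image ls --format yaml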

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-189533 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-189533"],"size":"4943877"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scra
per@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests
":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/core
dns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d
2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"912db3b4803663d0ccc308f0e2a272e1484c7b5393cf97877aa9c35301067ebc","repoDigests":["localhost/minikube-local-cache-test@sha256:81e105a65e2683518ebadf89d50e0ba24453d65037c9e48a8f7754d8fec1a8a0"],"repoTags":["localhost/minikube-local-cache-test:functional-189533"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.
k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kub
e-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-189533 image ls --format json --alsologtostderr:
I0803 23:47:57.879750  345820 out.go:291] Setting OutFile to fd 1 ...
I0803 23:47:57.879943  345820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:57.879954  345820 out.go:304] Setting ErrFile to fd 2...
I0803 23:47:57.879961  345820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:57.880220  345820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
I0803 23:47:57.880972  345820 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:57.881122  345820 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:57.881648  345820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:57.881724  345820 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:57.900163  345820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43991
I0803 23:47:57.900817  345820 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:57.901573  345820 main.go:141] libmachine: Using API Version  1
I0803 23:47:57.901600  345820 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:57.901953  345820 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:57.902153  345820 main.go:141] libmachine: (functional-189533) Calling .GetState
I0803 23:47:57.904116  345820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:57.904166  345820 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:57.923292  345820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
I0803 23:47:57.923873  345820 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:57.924513  345820 main.go:141] libmachine: Using API Version  1
I0803 23:47:57.924545  345820 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:57.924969  345820 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:57.925217  345820 main.go:141] libmachine: (functional-189533) Calling .DriverName
I0803 23:47:57.925486  345820 ssh_runner.go:195] Run: systemctl --version
I0803 23:47:57.925533  345820 main.go:141] libmachine: (functional-189533) Calling .GetSSHHostname
I0803 23:47:57.928784  345820 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:57.929141  345820 main.go:141] libmachine: (functional-189533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:79:96", ip: ""} in network mk-functional-189533: {Iface:virbr1 ExpiryTime:2024-08-04 00:44:21 +0000 UTC Type:0 Mac:52:54:00:66:79:96 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-189533 Clientid:01:52:54:00:66:79:96}
I0803 23:47:57.929173  345820 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined IP address 192.168.39.143 and MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:57.929341  345820 main.go:141] libmachine: (functional-189533) Calling .GetSSHPort
I0803 23:47:57.929541  345820 main.go:141] libmachine: (functional-189533) Calling .GetSSHKeyPath
I0803 23:47:57.929711  345820 main.go:141] libmachine: (functional-189533) Calling .GetSSHUsername
I0803 23:47:57.929823  345820 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/functional-189533/id_rsa Username:docker}
I0803 23:47:58.077848  345820 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:47:58.146908  345820 main.go:141] libmachine: Making call to close driver server
I0803 23:47:58.146926  345820 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:58.147256  345820 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:58.147277  345820 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:47:58.147291  345820 main.go:141] libmachine: Making call to close driver server
I0803 23:47:58.147301  345820 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:58.147529  345820 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:58.147553  345820 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:47:58.147556  345820 main.go:141] libmachine: (functional-189533) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-189533 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-189533
size: "4943877"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 912db3b4803663d0ccc308f0e2a272e1484c7b5393cf97877aa9c35301067ebc
repoDigests:
- localhost/minikube-local-cache-test@sha256:81e105a65e2683518ebadf89d50e0ba24453d65037c9e48a8f7754d8fec1a8a0
repoTags:
- localhost/minikube-local-cache-test:functional-189533
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-189533 image ls --format yaml --alsologtostderr:
I0803 23:47:57.525731  345748 out.go:291] Setting OutFile to fd 1 ...
I0803 23:47:57.525867  345748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:57.525878  345748 out.go:304] Setting ErrFile to fd 2...
I0803 23:47:57.525884  345748 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:57.526167  345748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
I0803 23:47:57.526925  345748 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:57.527084  345748 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:57.527641  345748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:57.527719  345748 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:57.543263  345748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42785
I0803 23:47:57.543761  345748 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:57.544613  345748 main.go:141] libmachine: Using API Version  1
I0803 23:47:57.544639  345748 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:57.545068  345748 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:57.545363  345748 main.go:141] libmachine: (functional-189533) Calling .GetState
I0803 23:47:57.548074  345748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:57.548127  345748 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:57.564447  345748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
I0803 23:47:57.565188  345748 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:57.565773  345748 main.go:141] libmachine: Using API Version  1
I0803 23:47:57.565830  345748 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:57.566217  345748 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:57.566412  345748 main.go:141] libmachine: (functional-189533) Calling .DriverName
I0803 23:47:57.566653  345748 ssh_runner.go:195] Run: systemctl --version
I0803 23:47:57.566701  345748 main.go:141] libmachine: (functional-189533) Calling .GetSSHHostname
I0803 23:47:57.569890  345748 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:57.570301  345748 main.go:141] libmachine: (functional-189533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:79:96", ip: ""} in network mk-functional-189533: {Iface:virbr1 ExpiryTime:2024-08-04 00:44:21 +0000 UTC Type:0 Mac:52:54:00:66:79:96 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-189533 Clientid:01:52:54:00:66:79:96}
I0803 23:47:57.570479  345748 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined IP address 192.168.39.143 and MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:57.570640  345748 main.go:141] libmachine: (functional-189533) Calling .GetSSHPort
I0803 23:47:57.570854  345748 main.go:141] libmachine: (functional-189533) Calling .GetSSHKeyPath
I0803 23:47:57.571005  345748 main.go:141] libmachine: (functional-189533) Calling .GetSSHUsername
I0803 23:47:57.571154  345748 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/functional-189533/id_rsa Username:docker}
I0803 23:47:57.679885  345748 ssh_runner.go:195] Run: sudo crictl images --output json
I0803 23:47:57.752886  345748 main.go:141] libmachine: Making call to close driver server
I0803 23:47:57.752899  345748 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:57.753257  345748 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:57.753280  345748 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:47:57.753297  345748 main.go:141] libmachine: Making call to close driver server
I0803 23:47:57.753299  345748 main.go:141] libmachine: (functional-189533) DBG | Closing plugin on server side
I0803 23:47:57.753307  345748 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:47:57.753641  345748 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:47:57.753669  345748 main.go:141] libmachine: (functional-189533) DBG | Closing plugin on server side
I0803 23:47:57.753658  345748 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
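
The image ls --format yaml output above is a plain YAML sequence whose entries carry only four keys: id, repoDigests, repoTags and size. A minimal Go sketch for consuming it is shown below; it assumes the command output is piped to stdin and uses gopkg.in/yaml.v3 (the file name and struct are illustrative, not part of minikube or its test suite):

	// imagelist.go - illustrative sketch: parse the output of
	//   out/minikube-linux-amd64 -p functional-189533 image ls --format yaml
	// piped to stdin. Field names mirror the YAML keys in the log above.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// image is one entry of the YAML sequence emitted by "image ls --format yaml".
	type image struct {
		ID          string   `yaml:"id"`
		RepoDigests []string `yaml:"repoDigests"`
		RepoTags    []string `yaml:"repoTags"`
		Size        string   `yaml:"size"` // emitted as a quoted byte count
	}

	func main() {
		data, err := io.ReadAll(os.Stdin)
		if err != nil {
			panic(err)
		}
		var images []image
		if err := yaml.Unmarshal(data, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Printf("%.12s  %10s bytes  %v\n", img.ID, img.Size, img.RepoTags)
		}
	}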

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-189533 ssh pgrep buildkitd: exit status 1 (312.29508ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image build -t localhost/my-image:functional-189533 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 image build -t localhost/my-image:functional-189533 testdata/build --alsologtostderr: (3.914535533s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-189533 image build -t localhost/my-image:functional-189533 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bc43ee0bf41
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-189533
--> 7b5cfcb34e7
Successfully tagged localhost/my-image:functional-189533
7b5cfcb34e70bc870a9caadbbb6dfcd6a14d615b578883f248d52cbe4b6402dd
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-189533 image build -t localhost/my-image:functional-189533 testdata/build --alsologtostderr:
I0803 23:47:58.118815  345859 out.go:291] Setting OutFile to fd 1 ...
I0803 23:47:58.119005  345859 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:58.119019  345859 out.go:304] Setting ErrFile to fd 2...
I0803 23:47:58.119026  345859 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:47:58.119351  345859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
I0803 23:47:58.120283  345859 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:58.120941  345859 config.go:182] Loaded profile config "functional-189533": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0803 23:47:58.121353  345859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:58.121396  345859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:58.136977  345859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42955
I0803 23:47:58.137485  345859 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:58.138234  345859 main.go:141] libmachine: Using API Version  1
I0803 23:47:58.138260  345859 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:58.138631  345859 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:58.138845  345859 main.go:141] libmachine: (functional-189533) Calling .GetState
I0803 23:47:58.140997  345859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0803 23:47:58.141050  345859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0803 23:47:58.158499  345859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41057
I0803 23:47:58.158997  345859 main.go:141] libmachine: () Calling .GetVersion
I0803 23:47:58.159486  345859 main.go:141] libmachine: Using API Version  1
I0803 23:47:58.159518  345859 main.go:141] libmachine: () Calling .SetConfigRaw
I0803 23:47:58.159864  345859 main.go:141] libmachine: () Calling .GetMachineName
I0803 23:47:58.160048  345859 main.go:141] libmachine: (functional-189533) Calling .DriverName
I0803 23:47:58.160280  345859 ssh_runner.go:195] Run: systemctl --version
I0803 23:47:58.160305  345859 main.go:141] libmachine: (functional-189533) Calling .GetSSHHostname
I0803 23:47:58.163325  345859 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:58.163698  345859 main.go:141] libmachine: (functional-189533) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:79:96", ip: ""} in network mk-functional-189533: {Iface:virbr1 ExpiryTime:2024-08-04 00:44:21 +0000 UTC Type:0 Mac:52:54:00:66:79:96 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:functional-189533 Clientid:01:52:54:00:66:79:96}
I0803 23:47:58.163732  345859 main.go:141] libmachine: (functional-189533) DBG | domain functional-189533 has defined IP address 192.168.39.143 and MAC address 52:54:00:66:79:96 in network mk-functional-189533
I0803 23:47:58.163917  345859 main.go:141] libmachine: (functional-189533) Calling .GetSSHPort
I0803 23:47:58.164112  345859 main.go:141] libmachine: (functional-189533) Calling .GetSSHKeyPath
I0803 23:47:58.164257  345859 main.go:141] libmachine: (functional-189533) Calling .GetSSHUsername
I0803 23:47:58.164390  345859 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/functional-189533/id_rsa Username:docker}
I0803 23:47:58.325039  345859 build_images.go:161] Building image from path: /tmp/build.1042590539.tar
I0803 23:47:58.325104  345859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0803 23:47:58.346220  345859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1042590539.tar
I0803 23:47:58.358884  345859 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1042590539.tar: stat -c "%s %y" /var/lib/minikube/build/build.1042590539.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1042590539.tar': No such file or directory
I0803 23:47:58.358917  345859 ssh_runner.go:362] scp /tmp/build.1042590539.tar --> /var/lib/minikube/build/build.1042590539.tar (3072 bytes)
I0803 23:47:58.431867  345859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1042590539
I0803 23:47:58.446601  345859 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1042590539 -xf /var/lib/minikube/build/build.1042590539.tar
I0803 23:47:58.462418  345859 crio.go:315] Building image: /var/lib/minikube/build/build.1042590539
I0803 23:47:58.462506  345859 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-189533 /var/lib/minikube/build/build.1042590539 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0803 23:48:01.956339  345859 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-189533 /var/lib/minikube/build/build.1042590539 --cgroup-manager=cgroupfs: (3.493797351s)
I0803 23:48:01.956457  345859 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1042590539
I0803 23:48:01.969177  345859 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1042590539.tar
I0803 23:48:01.980640  345859 build_images.go:217] Built localhost/my-image:functional-189533 from /tmp/build.1042590539.tar
I0803 23:48:01.980682  345859 build_images.go:133] succeeded building to: functional-189533
I0803 23:48:01.980686  345859 build_images.go:134] failed building to: 
I0803 23:48:01.980713  345859 main.go:141] libmachine: Making call to close driver server
I0803 23:48:01.980723  345859 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:48:01.981069  345859 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:48:01.981094  345859 main.go:141] libmachine: Making call to close connection to plugin binary
I0803 23:48:01.981098  345859 main.go:141] libmachine: (functional-189533) DBG | Closing plugin on server side
I0803 23:48:01.981103  345859 main.go:141] libmachine: Making call to close driver server
I0803 23:48:01.981115  345859 main.go:141] libmachine: (functional-189533) Calling .Close
I0803 23:48:01.981424  345859 main.go:141] libmachine: Successfully made call to close driver server
I0803 23:48:01.981439  345859 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls
2024/08/03 23:48:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.44s)
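
For reference, the three STEP lines in the stdout above mean the build context in testdata/build amounts to a Containerfile equivalent to the following (a reconstruction from the build log, not a copy of the actual file; content.txt is whatever small file the test ships in that directory):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

Because CRI-O does not build images itself, the stderr trace shows how the test stages the work: the context is packed into /tmp/build.1042590539.tar, copied to /var/lib/minikube/build on the node, unpacked, and built with sudo podman build --cgroup-manager=cgroupfs, after which the temporary build files are removed.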

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-189533
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.40s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image load --daemon docker.io/kicbase/echo-server:functional-189533 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 image load --daemon docker.io/kicbase/echo-server:functional-189533 --alsologtostderr: (1.500177021s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image load --daemon docker.io/kicbase/echo-server:functional-189533 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-189533
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image load --daemon docker.io/kicbase/echo-server:functional-189533 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image save docker.io/kicbase/echo-server:functional-189533 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 image save docker.io/kicbase/echo-server:functional-189533 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.126850335s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image rm docker.io/kicbase/echo-server:functional-189533 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-189533 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.012509103s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-189533
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-189533 image save --daemon docker.io/kicbase/echo-server:functional-189533 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-189533
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.74s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-189533
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-189533
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-189533
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-349588 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-349588 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m27.806761478s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (208.48s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-349588 -- rollout status deployment/busybox: (2.593182467s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-4mwk4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-mlkx9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-szvhv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-4mwk4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-mlkx9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-szvhv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-4mwk4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-mlkx9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-szvhv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.83s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-4mwk4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-4mwk4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-mlkx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-mlkx9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-szvhv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-349588 -- exec busybox-fc5497c4f-szvhv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-349588 -v=7 --alsologtostderr
E0803 23:52:24.416913  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:24.422743  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:24.432990  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:24.453278  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:24.493607  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:24.574732  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:24.735167  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:25.055776  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:25.696071  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:26.976957  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:29.537855  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0803 23:52:34.658845  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-349588 -v=7 --alsologtostderr: (57.428168966s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.28s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-349588 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp testdata/cp-test.txt ha-349588:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588:/home/docker/cp-test.txt ha-349588-m02:/home/docker/cp-test_ha-349588_ha-349588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test.txt"
E0803 23:52:44.899704  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test_ha-349588_ha-349588-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588:/home/docker/cp-test.txt ha-349588-m03:/home/docker/cp-test_ha-349588_ha-349588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test_ha-349588_ha-349588-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588:/home/docker/cp-test.txt ha-349588-m04:/home/docker/cp-test_ha-349588_ha-349588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test_ha-349588_ha-349588-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp testdata/cp-test.txt ha-349588-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m02:/home/docker/cp-test.txt ha-349588:/home/docker/cp-test_ha-349588-m02_ha-349588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test_ha-349588-m02_ha-349588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m02:/home/docker/cp-test.txt ha-349588-m03:/home/docker/cp-test_ha-349588-m02_ha-349588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test_ha-349588-m02_ha-349588-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m02:/home/docker/cp-test.txt ha-349588-m04:/home/docker/cp-test_ha-349588-m02_ha-349588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test_ha-349588-m02_ha-349588-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp testdata/cp-test.txt ha-349588-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt ha-349588:/home/docker/cp-test_ha-349588-m03_ha-349588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test_ha-349588-m03_ha-349588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt ha-349588-m02:/home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test_ha-349588-m03_ha-349588-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m03:/home/docker/cp-test.txt ha-349588-m04:/home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test_ha-349588-m03_ha-349588-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp testdata/cp-test.txt ha-349588-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1280567125/001/cp-test_ha-349588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt ha-349588:/home/docker/cp-test_ha-349588-m04_ha-349588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588 "sudo cat /home/docker/cp-test_ha-349588-m04_ha-349588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt ha-349588-m02:/home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m02 "sudo cat /home/docker/cp-test_ha-349588-m04_ha-349588-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 cp ha-349588-m04:/home/docker/cp-test.txt ha-349588-m03:/home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 ssh -n ha-349588-m03 "sudo cat /home/docker/cp-test_ha-349588-m04_ha-349588-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.01s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.490083021s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-349588 node delete m03 -v=7 --alsologtostderr: (16.631568419s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (480.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-349588 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0804 00:07:24.417400  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0804 00:08:47.463930  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
E0804 00:12:24.417002  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-349588 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (8m0.162862177s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (480.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-349588 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-349588 --control-plane -v=7 --alsologtostderr: (1m15.513473147s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-349588 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (97.29s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-579995 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-579995 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.284734328s)
--- PASS: TestJSONOutput/start/Command (97.29s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-579995 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-579995 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.4s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-579995 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-579995 --output=json --user=testUser: (7.396135892s)
--- PASS: TestJSONOutput/stop/Command (7.40s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-780577 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-780577 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.799843ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"06526b63-578e-4afe-be06-1ac1f53e6d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-780577] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a28c42d-12f9-4baa-8f53-af91e25c6c62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"10da1749-80e0-45a5-8ab0-5a2e96ea8596","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7df00cbe-3045-4fed-ba8e-47a9ef1a9468","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig"}}
	{"specversion":"1.0","id":"d4bd7893-9fab-444a-b169-24efce5af867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube"}}
	{"specversion":"1.0","id":"08c08f92-3580-4d83-bc5f-af9936f4fe14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d3335725-8571-47ca-8927-c74d4ed02384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6e834c52-804c-4dd5-996c-5556dc5969a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-780577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-780577
--- PASS: TestErrorJSONOutput (0.20s)
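
The JSON lines above are minikube's machine-readable event stream: each line is a CloudEvents-style object with specversion, id, source, type, datacontenttype and a data payload, where step events carry currentstep/totalsteps/message, info events carry a message, and error events carry exitcode, advice and a message. A minimal Go sketch for consuming such a stream is shown below; the event struct and its field names are inferred from the lines above rather than taken from minikube's own source, so treat it as an illustration only.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above; this is an
// inferred, illustrative shape, not minikube's own type.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"` // string-valued fields, as in the lines above
}

func main() {
	// Feed the stream on stdin, e.g.:
	//   out/minikube-linux-amd64 start -p demo --output=json | go run main.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit code %s): %s\n", e.Data["exitcode"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
}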

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (89.43s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-281552 --driver=kvm2  --container-runtime=crio
E0804 00:17:24.417780  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-281552 --driver=kvm2  --container-runtime=crio: (42.983927906s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-284512 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-284512 --driver=kvm2  --container-runtime=crio: (43.774131825s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-281552
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-284512
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-284512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-284512
helpers_test.go:175: Cleaning up "first-281552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-281552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-281552: (1.000290005s)
--- PASS: TestMinikubeProfile (89.43s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-495108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-495108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.348400187s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.35s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-495108 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-495108 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.985792504s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.99s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513211 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513211 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-495108 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513211 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513211 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-513211
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-513211: (1.280266691s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.31s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513211
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513211: (21.310310151s)
--- PASS: TestMountStart/serial/RestartStopped (22.31s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513211 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513211 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (123.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453015 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453015 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m2.677705656s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.10s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-453015 -- rollout status deployment/busybox: (2.26154428s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-8sw6h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-qcrhw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-8sw6h -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-qcrhw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-8sw6h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-qcrhw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.80s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-8sw6h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-8sw6h -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-qcrhw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-453015 -- exec busybox-fc5497c4f-qcrhw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (47.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-453015 -v 3 --alsologtostderr
E0804 00:22:24.416801  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-453015 -v 3 --alsologtostderr: (47.015947321s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.59s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-453015 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp testdata/cp-test.txt multinode-453015:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2291356066/001/cp-test_multinode-453015.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015:/home/docker/cp-test.txt multinode-453015-m02:/home/docker/cp-test_multinode-453015_multinode-453015-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m02 "sudo cat /home/docker/cp-test_multinode-453015_multinode-453015-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015:/home/docker/cp-test.txt multinode-453015-m03:/home/docker/cp-test_multinode-453015_multinode-453015-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m03 "sudo cat /home/docker/cp-test_multinode-453015_multinode-453015-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp testdata/cp-test.txt multinode-453015-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2291356066/001/cp-test_multinode-453015-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt multinode-453015:/home/docker/cp-test_multinode-453015-m02_multinode-453015.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015 "sudo cat /home/docker/cp-test_multinode-453015-m02_multinode-453015.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015-m02:/home/docker/cp-test.txt multinode-453015-m03:/home/docker/cp-test_multinode-453015-m02_multinode-453015-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m03 "sudo cat /home/docker/cp-test_multinode-453015-m02_multinode-453015-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp testdata/cp-test.txt multinode-453015-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2291356066/001/cp-test_multinode-453015-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt multinode-453015:/home/docker/cp-test_multinode-453015-m03_multinode-453015.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015 "sudo cat /home/docker/cp-test_multinode-453015-m03_multinode-453015.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 cp multinode-453015-m03:/home/docker/cp-test.txt multinode-453015-m02:/home/docker/cp-test_multinode-453015-m03_multinode-453015-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 ssh -n multinode-453015-m02 "sudo cat /home/docker/cp-test_multinode-453015-m03_multinode-453015-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.32s)

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-453015 node stop m03: (1.436374452s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453015 status: exit status 7 (429.836241ms)

                                                
                                                
-- stdout --
	multinode-453015
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-453015-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-453015-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-453015 status --alsologtostderr: exit status 7 (420.10041ms)

                                                
                                                
-- stdout --
	multinode-453015
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-453015-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-453015-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 00:22:44.708155  364282 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:22:44.708281  364282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:22:44.708291  364282 out.go:304] Setting ErrFile to fd 2...
	I0804 00:22:44.708295  364282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:22:44.708506  364282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19370-323890/.minikube/bin
	I0804 00:22:44.708720  364282 out.go:298] Setting JSON to false
	I0804 00:22:44.708764  364282 mustload.go:65] Loading cluster: multinode-453015
	I0804 00:22:44.708884  364282 notify.go:220] Checking for updates...
	I0804 00:22:44.709201  364282 config.go:182] Loaded profile config "multinode-453015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:22:44.709219  364282 status.go:255] checking status of multinode-453015 ...
	I0804 00:22:44.709649  364282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:22:44.709708  364282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:22:44.726177  364282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0804 00:22:44.726700  364282 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:22:44.727433  364282 main.go:141] libmachine: Using API Version  1
	I0804 00:22:44.727459  364282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:22:44.727887  364282 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:22:44.728102  364282 main.go:141] libmachine: (multinode-453015) Calling .GetState
	I0804 00:22:44.730028  364282 status.go:330] multinode-453015 host status = "Running" (err=<nil>)
	I0804 00:22:44.730051  364282 host.go:66] Checking if "multinode-453015" exists ...
	I0804 00:22:44.730384  364282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:22:44.730435  364282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:22:44.747147  364282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38941
	I0804 00:22:44.747598  364282 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:22:44.748145  364282 main.go:141] libmachine: Using API Version  1
	I0804 00:22:44.748169  364282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:22:44.748497  364282 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:22:44.748729  364282 main.go:141] libmachine: (multinode-453015) Calling .GetIP
	I0804 00:22:44.751354  364282 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:22:44.751693  364282 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:22:44.751727  364282 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:22:44.751812  364282 host.go:66] Checking if "multinode-453015" exists ...
	I0804 00:22:44.752132  364282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:22:44.752178  364282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:22:44.767815  364282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0804 00:22:44.768234  364282 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:22:44.768703  364282 main.go:141] libmachine: Using API Version  1
	I0804 00:22:44.768725  364282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:22:44.769052  364282 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:22:44.769253  364282 main.go:141] libmachine: (multinode-453015) Calling .DriverName
	I0804 00:22:44.769412  364282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:22:44.769439  364282 main.go:141] libmachine: (multinode-453015) Calling .GetSSHHostname
	I0804 00:22:44.772205  364282 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:22:44.772641  364282 main.go:141] libmachine: (multinode-453015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:e7:22", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:19:54 +0000 UTC Type:0 Mac:52:54:00:e0:e7:22 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:multinode-453015 Clientid:01:52:54:00:e0:e7:22}
	I0804 00:22:44.772668  364282 main.go:141] libmachine: (multinode-453015) DBG | domain multinode-453015 has defined IP address 192.168.39.23 and MAC address 52:54:00:e0:e7:22 in network mk-multinode-453015
	I0804 00:22:44.772776  364282 main.go:141] libmachine: (multinode-453015) Calling .GetSSHPort
	I0804 00:22:44.772944  364282 main.go:141] libmachine: (multinode-453015) Calling .GetSSHKeyPath
	I0804 00:22:44.773085  364282 main.go:141] libmachine: (multinode-453015) Calling .GetSSHUsername
	I0804 00:22:44.773250  364282 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015/id_rsa Username:docker}
	I0804 00:22:44.853344  364282 ssh_runner.go:195] Run: systemctl --version
	I0804 00:22:44.859490  364282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:22:44.873904  364282 kubeconfig.go:125] found "multinode-453015" server: "https://192.168.39.23:8443"
	I0804 00:22:44.873936  364282 api_server.go:166] Checking apiserver status ...
	I0804 00:22:44.873971  364282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 00:22:44.887859  364282 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0804 00:22:44.897528  364282 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 00:22:44.897587  364282 ssh_runner.go:195] Run: ls
	I0804 00:22:44.902603  364282 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I0804 00:22:44.906551  364282 api_server.go:279] https://192.168.39.23:8443/healthz returned 200:
	ok
	I0804 00:22:44.906574  364282 status.go:422] multinode-453015 apiserver status = Running (err=<nil>)
	I0804 00:22:44.906585  364282 status.go:257] multinode-453015 status: &{Name:multinode-453015 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:22:44.906621  364282 status.go:255] checking status of multinode-453015-m02 ...
	I0804 00:22:44.906935  364282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:22:44.906971  364282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:22:44.922774  364282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0804 00:22:44.923221  364282 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:22:44.923775  364282 main.go:141] libmachine: Using API Version  1
	I0804 00:22:44.923804  364282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:22:44.924144  364282 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:22:44.924338  364282 main.go:141] libmachine: (multinode-453015-m02) Calling .GetState
	I0804 00:22:44.926240  364282 status.go:330] multinode-453015-m02 host status = "Running" (err=<nil>)
	I0804 00:22:44.926263  364282 host.go:66] Checking if "multinode-453015-m02" exists ...
	I0804 00:22:44.926584  364282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:22:44.926638  364282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:22:44.942538  364282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I0804 00:22:44.942963  364282 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:22:44.943400  364282 main.go:141] libmachine: Using API Version  1
	I0804 00:22:44.943422  364282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:22:44.943760  364282 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:22:44.943940  364282 main.go:141] libmachine: (multinode-453015-m02) Calling .GetIP
	I0804 00:22:44.946741  364282 main.go:141] libmachine: (multinode-453015-m02) DBG | domain multinode-453015-m02 has defined MAC address 52:54:00:f6:c1:50 in network mk-multinode-453015
	I0804 00:22:44.947167  364282 main.go:141] libmachine: (multinode-453015-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c1:50", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:21:08 +0000 UTC Type:0 Mac:52:54:00:f6:c1:50 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-453015-m02 Clientid:01:52:54:00:f6:c1:50}
	I0804 00:22:44.947196  364282 main.go:141] libmachine: (multinode-453015-m02) DBG | domain multinode-453015-m02 has defined IP address 192.168.39.217 and MAC address 52:54:00:f6:c1:50 in network mk-multinode-453015
	I0804 00:22:44.947299  364282 host.go:66] Checking if "multinode-453015-m02" exists ...
	I0804 00:22:44.947736  364282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:22:44.947787  364282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:22:44.963659  364282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0804 00:22:44.964127  364282 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:22:44.964620  364282 main.go:141] libmachine: Using API Version  1
	I0804 00:22:44.964641  364282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:22:44.964913  364282 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:22:44.965057  364282 main.go:141] libmachine: (multinode-453015-m02) Calling .DriverName
	I0804 00:22:44.965195  364282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 00:22:44.965212  364282 main.go:141] libmachine: (multinode-453015-m02) Calling .GetSSHHostname
	I0804 00:22:44.968359  364282 main.go:141] libmachine: (multinode-453015-m02) DBG | domain multinode-453015-m02 has defined MAC address 52:54:00:f6:c1:50 in network mk-multinode-453015
	I0804 00:22:44.968806  364282 main.go:141] libmachine: (multinode-453015-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c1:50", ip: ""} in network mk-multinode-453015: {Iface:virbr1 ExpiryTime:2024-08-04 01:21:08 +0000 UTC Type:0 Mac:52:54:00:f6:c1:50 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-453015-m02 Clientid:01:52:54:00:f6:c1:50}
	I0804 00:22:44.968838  364282 main.go:141] libmachine: (multinode-453015-m02) DBG | domain multinode-453015-m02 has defined IP address 192.168.39.217 and MAC address 52:54:00:f6:c1:50 in network mk-multinode-453015
	I0804 00:22:44.968992  364282 main.go:141] libmachine: (multinode-453015-m02) Calling .GetSSHPort
	I0804 00:22:44.969191  364282 main.go:141] libmachine: (multinode-453015-m02) Calling .GetSSHKeyPath
	I0804 00:22:44.969327  364282 main.go:141] libmachine: (multinode-453015-m02) Calling .GetSSHUsername
	I0804 00:22:44.969461  364282 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19370-323890/.minikube/machines/multinode-453015-m02/id_rsa Username:docker}
	I0804 00:22:45.048842  364282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 00:22:45.063269  364282 status.go:257] multinode-453015-m02 status: &{Name:multinode-453015-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0804 00:22:45.063323  364282 status.go:255] checking status of multinode-453015-m03 ...
	I0804 00:22:45.063750  364282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:22:45.063795  364282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:22:45.080050  364282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0804 00:22:45.080507  364282 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:22:45.081038  364282 main.go:141] libmachine: Using API Version  1
	I0804 00:22:45.081062  364282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:22:45.081429  364282 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:22:45.081657  364282 main.go:141] libmachine: (multinode-453015-m03) Calling .GetState
	I0804 00:22:45.083376  364282 status.go:330] multinode-453015-m03 host status = "Stopped" (err=<nil>)
	I0804 00:22:45.083395  364282 status.go:343] host is not running, skipping remaining checks
	I0804 00:22:45.083403  364282 status.go:257] multinode-453015-m03 status: &{Name:multinode-453015-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
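
The stderr trace above shows the sequence behind a status probe: ask the kvm2 driver for the VM state, ssh into the node to confirm the kubelet service is active, locate the kube-apiserver process, and finally GET the apiserver's /healthz endpoint, treating a 200 response as healthy. A minimal Go sketch of that last health check follows; it skips TLS verification purely for illustration (an assumption made here, while the real status check authenticates with the cluster's credentials), and the endpoint address is simply the one that appears in the log above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs the kind of probe logged above:
// GET https://<apiserver>/healthz and treat HTTP 200 as healthy.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip certificate verification instead of
			// loading the cluster CA the way the real status check would.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if err := checkHealthz("https://192.168.39.23:8443"); err != nil {
		fmt.Println("apiserver unhealthy:", err)
		return
	}
	fmt.Println("apiserver healthz returned 200: ok")
}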

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-453015 node start m03 -v=7 --alsologtostderr: (36.786336082s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.42s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-453015 node delete m03: (1.756900399s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.29s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (187.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453015 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0804 00:32:24.417642  331097 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19370-323890/.minikube/profiles/functional-189533/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453015 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m7.010300192s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-453015 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (187.55s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-453015
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453015-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-453015-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.748704ms)

                                                
                                                
-- stdout --
	* [multinode-453015-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-453015-m02' is duplicated with machine name 'multinode-453015-m02' in profile 'multinode-453015'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-453015-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-453015-m03 --driver=kvm2  --container-runtime=crio: (44.244299967s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-453015
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-453015: exit status 80 (224.58832ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-453015 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-453015-m03 already exists in multinode-453015-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-453015-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-453015-m03: (1.014814058s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.59s)

                                                
                                    
TestScheduledStopUnix (115.78s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-959714 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-959714 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.175182531s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959714 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-959714 -n scheduled-stop-959714
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959714 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959714 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959714 -n scheduled-stop-959714
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-959714
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-959714 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-959714
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-959714: exit status 7 (65.104251ms)

                                                
                                                
-- stdout --
	scheduled-stop-959714
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959714 -n scheduled-stop-959714
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-959714 -n scheduled-stop-959714: exit status 7 (66.091165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-959714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-959714
--- PASS: TestScheduledStopUnix (115.78s)

                                                
                                    
TestRunningBinaryUpgrade (158.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2503583897 start -p running-upgrade-380850 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2503583897 start -p running-upgrade-380850 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m21.971299438s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-380850 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-380850 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.11615438s)
helpers_test.go:175: Cleaning up "running-upgrade-380850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-380850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-380850: (1.39268104s)
--- PASS: TestRunningBinaryUpgrade (158.98s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419151 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-419151 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (87.471724ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-419151] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19370
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19370-323890/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19370-323890/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
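The StartNoK8sWithVersion check above shows minikube refusing to combine --no-kubernetes with --kubernetes-version, failing fast with exit status 14 and an MK_USAGE message. A minimal Go sketch of asserting that behaviour follows; it is not the actual no_kubernetes_test.go code, and the binary path and profile name are placeholders.

	// Hedged sketch: assert that --no-kubernetes plus --kubernetes-version is rejected
	// with exit status 14, as in the run above. Binary path and profile are placeholders.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "nok8s-demo",
			"--no-kubernetes", "--kubernetes-version=1.20",
			"--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()

		// The expected outcome is a usage error, surfaced as exit code 14.
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			fmt.Println("got expected usage error (exit 14):")
			fmt.Println(string(out))
			return
		}
		fmt.Printf("unexpected result: err=%v\n%s\n", err, out)
	}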

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (93.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419151 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-419151 --driver=kvm2  --container-runtime=crio: (1m32.856666963s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-419151 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.10s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (152.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4210855585 start -p stopped-upgrade-742754 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4210855585 start -p stopped-upgrade-742754 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m19.906736204s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4210855585 -p stopped-upgrade-742754 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4210855585 -p stopped-upgrade-742754 stop: (1.467951641s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-742754 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-742754 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.932992729s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (152.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (41.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419151 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-419151 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.38107761s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-419151 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-419151 status -o json: exit status 2 (272.817835ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-419151","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-419151
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-419151: (1.057252921s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.71s)
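The status output captured above (Host "Running", Kubelet and APIServer "Stopped") is what the test inspects after restarting the profile with --no-kubernetes. Below is a hedged Go sketch of decoding that JSON and checking the expected shape; the struct fields simply mirror the output in the log, and the binary path is an assumption.

	// Hedged sketch: decode "minikube status -o json" and confirm the host is
	// running while the kubelet and apiserver are stopped, as in the output above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Fields mirror the JSON printed in the log above.
	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// "status" exits non-zero (status 2 above) when components are stopped,
		// so the exit code is ignored here and only the JSON payload is checked.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-419151",
			"status", "-o", "json").Output()

		var st profileStatus
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatalf("decoding status: %v", err)
		}
		if st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped" {
			fmt.Println("host is up with Kubernetes disabled, as expected")
		} else {
			fmt.Printf("unexpected status: %+v\n", st)
		}
	}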

                                                
                                    
x
+
TestNoKubernetes/serial/Start (48.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419151 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-419151 --no-kubernetes --driver=kvm2  --container-runtime=crio: (48.187893327s)
--- PASS: TestNoKubernetes/serial/Start (48.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-419151 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-419151 "sudo systemctl is-active --quiet service kubelet": exit status 1 (245.045755ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
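In the VerifyK8sNotRunning step above, the non-zero exit is the passing outcome: systemctl is-active returns 0 only when the unit is active, and status 3 is what systemd reports for an inactive unit. A minimal Go sketch of the same check follows, with the binary path and profile name taken from the log purely for illustration.

	// Hedged sketch: a non-zero exit from the ssh'd systemctl check is the
	// desired outcome here, since it means the kubelet unit is not active.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-419151",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not running, as expected:", err)
			return
		}
		fmt.Println("unexpected: kubelet unit reports active")
	}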

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.641016081s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.066402452s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-419151
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-419151: (1.343797991s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (32.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-419151 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-419151 --driver=kvm2  --container-runtime=crio: (32.545826541s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (32.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-419151 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-419151 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.09036ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-742754
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                    
x
+
TestPause/serial/Start (58.6s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-026475 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-026475 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (58.597144016s)
--- PASS: TestPause/serial/Start (58.60s)

                                                
                                    

Test skip (35/215)

Order Skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
167 TestImageBuild 0
194 TestKicCustomNetwork 0
195 TestKicExistingNetwork 0
196 TestKicCustomSubnet 0
197 TestKicStaticIP 0
229 TestChangeNoneUser 0
232 TestScheduledStopWindows 0
234 TestSkaffold 0
236 TestInsufficientStorage 0
240 TestMissingContainerUpgrade 0
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    